00:00:00.001 Started by upstream project "autotest-per-patch" build number 132369 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.179 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.180 The recommended git tool is: git 00:00:00.180 using credential 00000000-0000-0000-0000-000000000002 00:00:00.184 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.209 Fetching changes from the remote Git repository 00:00:00.213 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.233 Using shallow fetch with depth 1 00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.233 > git --version # timeout=10 00:00:00.255 > git --version # 'git version 2.39.2' 00:00:00.255 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.950 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.960 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.970 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.970 > git config core.sparsecheckout # timeout=10 00:00:05.980 > git read-tree -mu HEAD # timeout=10 00:00:05.996 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.020 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.020 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.128 [Pipeline] Start of Pipeline 00:00:06.146 [Pipeline] library 00:00:06.148 Loading library shm_lib@master 00:00:06.148 Library shm_lib@master is cached. Copying from home. 00:00:06.169 [Pipeline] node 00:00:06.181 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.183 [Pipeline] { 00:00:06.194 [Pipeline] catchError 00:00:06.196 [Pipeline] { 00:00:06.209 [Pipeline] wrap 00:00:06.218 [Pipeline] { 00:00:06.226 [Pipeline] stage 00:00:06.227 [Pipeline] { (Prologue) 00:00:06.244 [Pipeline] echo 00:00:06.246 Node: VM-host-SM38 00:00:06.252 [Pipeline] cleanWs 00:00:06.263 [WS-CLEANUP] Deleting project workspace... 00:00:06.263 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.269 [WS-CLEANUP] done 00:00:06.489 [Pipeline] setCustomBuildProperty 00:00:06.562 [Pipeline] httpRequest 00:00:06.908 [Pipeline] echo 00:00:06.910 Sorcerer 10.211.164.20 is alive 00:00:06.921 [Pipeline] retry 00:00:06.923 [Pipeline] { 00:00:06.935 [Pipeline] httpRequest 00:00:06.940 HttpMethod: GET 00:00:06.940 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.941 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.949 Response Code: HTTP/1.1 200 OK 00:00:06.950 Success: Status code 200 is in the accepted range: 200,404 00:00:06.950 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.879 [Pipeline] } 00:00:21.897 [Pipeline] // retry 00:00:21.905 [Pipeline] sh 00:00:22.190 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.211 [Pipeline] httpRequest 00:00:22.634 [Pipeline] echo 00:00:22.636 Sorcerer 10.211.164.20 is alive 00:00:22.646 [Pipeline] retry 00:00:22.648 [Pipeline] { 00:00:22.664 [Pipeline] httpRequest 00:00:22.670 HttpMethod: GET 00:00:22.670 URL: http://10.211.164.20/packages/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:22.671 Sending request to url: http://10.211.164.20/packages/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:22.694 Response Code: HTTP/1.1 200 OK 00:00:22.695 Success: Status code 200 is in the accepted range: 200,404 00:00:22.696 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:01:16.911 [Pipeline] } 00:01:16.928 [Pipeline] // retry 00:01:16.937 [Pipeline] sh 00:01:17.224 + tar --no-same-owner -xf spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:01:20.560 [Pipeline] sh 00:01:20.845 + git -C spdk log --oneline -n5 00:01:20.845 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:01:20.845 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:01:20.845 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:01:20.845 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:01:20.845 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:01:20.866 [Pipeline] writeFile 00:01:20.883 [Pipeline] sh 00:01:21.169 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:21.183 [Pipeline] sh 00:01:21.470 + cat autorun-spdk.conf 00:01:21.470 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.470 SPDK_TEST_NVME=1 00:01:21.470 SPDK_TEST_FTL=1 00:01:21.470 SPDK_TEST_ISAL=1 00:01:21.470 SPDK_RUN_ASAN=1 00:01:21.470 SPDK_RUN_UBSAN=1 00:01:21.470 SPDK_TEST_XNVME=1 00:01:21.470 SPDK_TEST_NVME_FDP=1 00:01:21.470 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.479 RUN_NIGHTLY=0 00:01:21.481 [Pipeline] } 00:01:21.496 [Pipeline] // stage 00:01:21.513 [Pipeline] stage 00:01:21.515 [Pipeline] { (Run VM) 00:01:21.529 [Pipeline] sh 00:01:21.815 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:21.816 + echo 'Start stage prepare_nvme.sh' 00:01:21.816 Start stage prepare_nvme.sh 00:01:21.816 + [[ -n 5 ]] 00:01:21.816 + disk_prefix=ex5 00:01:21.816 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:21.816 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:21.816 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:21.816 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.816 ++ SPDK_TEST_NVME=1 00:01:21.816 ++ SPDK_TEST_FTL=1 00:01:21.816 ++ 
SPDK_TEST_ISAL=1 00:01:21.816 ++ SPDK_RUN_ASAN=1 00:01:21.816 ++ SPDK_RUN_UBSAN=1 00:01:21.816 ++ SPDK_TEST_XNVME=1 00:01:21.816 ++ SPDK_TEST_NVME_FDP=1 00:01:21.816 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.816 ++ RUN_NIGHTLY=0 00:01:21.816 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:21.816 + nvme_files=() 00:01:21.816 + declare -A nvme_files 00:01:21.816 + backend_dir=/var/lib/libvirt/images/backends 00:01:21.816 + nvme_files['nvme.img']=5G 00:01:21.816 + nvme_files['nvme-cmb.img']=5G 00:01:21.816 + nvme_files['nvme-multi0.img']=4G 00:01:21.816 + nvme_files['nvme-multi1.img']=4G 00:01:21.816 + nvme_files['nvme-multi2.img']=4G 00:01:21.816 + nvme_files['nvme-openstack.img']=8G 00:01:21.816 + nvme_files['nvme-zns.img']=5G 00:01:21.816 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:21.816 + (( SPDK_TEST_FTL == 1 )) 00:01:21.816 + nvme_files["nvme-ftl.img"]=6G 00:01:21.816 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:21.816 + nvme_files["nvme-fdp.img"]=1G 00:01:21.816 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:21.816 + for nvme in "${!nvme_files[@]}" 00:01:21.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:21.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.816 + for nvme in "${!nvme_files[@]}" 00:01:21.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:01:21.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:21.816 + for nvme in "${!nvme_files[@]}" 00:01:21.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:21.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.816 + for nvme in "${!nvme_files[@]}" 00:01:21.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:21.816 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:21.816 + for nvme in "${!nvme_files[@]}" 00:01:21.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:22.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.389 + for nvme in "${!nvme_files[@]}" 00:01:22.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:22.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.650 + for nvme in "${!nvme_files[@]}" 00:01:22.650 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:22.650 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.650 + for nvme in "${!nvme_files[@]}" 00:01:22.650 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:01:22.650 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:22.650 + for nvme in "${!nvme_files[@]}" 00:01:22.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 
5G 00:01:23.221 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.221 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:23.221 + echo 'End stage prepare_nvme.sh' 00:01:23.221 End stage prepare_nvme.sh 00:01:23.231 [Pipeline] sh 00:01:23.515 + DISTRO=fedora39 00:01:23.515 + CPUS=10 00:01:23.515 + RAM=12288 00:01:23.515 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:23.515 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:23.515 00:01:23.515 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:23.515 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:23.515 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:23.515 HELP=0 00:01:23.515 DRY_RUN=0 00:01:23.515 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:01:23.515 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:23.515 NVME_AUTO_CREATE=0 00:01:23.515 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:01:23.515 NVME_CMB=,,,, 00:01:23.515 NVME_PMR=,,,, 00:01:23.515 NVME_ZNS=,,,, 00:01:23.515 NVME_MS=true,,,, 00:01:23.515 NVME_FDP=,,,on, 00:01:23.515 SPDK_VAGRANT_DISTRO=fedora39 00:01:23.516 SPDK_VAGRANT_VMCPU=10 00:01:23.516 SPDK_VAGRANT_VMRAM=12288 00:01:23.516 SPDK_VAGRANT_PROVIDER=libvirt 00:01:23.516 SPDK_VAGRANT_HTTP_PROXY= 00:01:23.516 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:23.516 SPDK_OPENSTACK_NETWORK=0 00:01:23.516 VAGRANT_PACKAGE_BOX=0 00:01:23.516 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:23.516 FORCE_DISTRO=true 00:01:23.516 VAGRANT_BOX_VERSION= 00:01:23.516 EXTRA_VAGRANTFILES= 00:01:23.516 NIC_MODEL=e1000 00:01:23.516 00:01:23.516 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:23.516 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:26.066 Bringing machine 'default' up with 'libvirt' provider... 00:01:26.638 ==> default: Creating image (snapshot of base box volume). 00:01:26.638 ==> default: Creating domain with the following settings... 
00:01:26.638 ==> default: -- Name:              fedora39-39-1.5-1721788873-2326_default_1732094031_a6ea3b62b8eef73c875e
00:01:26.638 ==> default: -- Domain type: kvm
00:01:26.638 ==> default: -- Cpus:              10
00:01:26.638 ==> default: -- Feature:           acpi
00:01:26.638 ==> default: -- Feature:           apic
00:01:26.638 ==> default: -- Feature:           pae
00:01:26.638 ==> default: -- Memory:            12288M
00:01:26.638 ==> default: -- Memory Backing:    hugepages:
00:01:26.638 ==> default: -- Management MAC:
00:01:26.638 ==> default: -- Loader:
00:01:26.638 ==> default: -- Nvram:
00:01:26.638 ==> default: -- Base box:          spdk/fedora39
00:01:26.638 ==> default: -- Storage pool:      default
00:01:26.638 ==> default: -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732094031_a6ea3b62b8eef73c875e.img (20G)
00:01:26.638 ==> default: -- Volume Cache:      default
00:01:26.638 ==> default: -- Kernel:
00:01:26.638 ==> default: -- Initrd:
00:01:26.638 ==> default: -- Graphics Type:     vnc
00:01:26.638 ==> default: -- Graphics Port:     -1
00:01:26.638 ==> default: -- Graphics IP:       127.0.0.1
00:01:26.638 ==> default: -- Graphics Password: Not defined
00:01:26.638 ==> default: -- Video Type:        cirrus
00:01:26.638 ==> default: -- Video VRAM:        9216
00:01:26.638 ==> default: -- Sound Type:
00:01:26.638 ==> default: -- Keymap:            en-us
00:01:26.638 ==> default: -- TPM Path:
00:01:26.638 ==> default: -- INPUT:             type=mouse, bus=ps2
00:01:26.638 ==> default: -- Command line args:
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:26.638 ==> default: -> value=-drive,
00:01:26.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:26.638 ==> default: -> value=-device,
00:01:26.638 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.900 ==> default: Creating shared folders metadata...
00:01:26.900 ==> default: Starting domain.
00:01:28.816 ==> default: Waiting for domain to get an IP address...
00:01:50.782 ==> default: Waiting for SSH to become available...
00:01:50.782 ==> default: Configuring and enabling network interfaces...
00:01:53.328     default: SSH address: 192.168.121.229:22
00:01:53.328     default: SSH username: vagrant
00:01:53.328     default: SSH auth method: private key
00:01:55.259 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:05.259 ==> default: Mounting SSHFS shared folder...
00:02:06.203 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:06.203 ==> default: Checking Mount..
00:02:07.146 ==> default: Folder Successfully Mounted!
00:02:07.146 
00:02:07.146   SUCCESS!
00:02:07.146 
00:02:07.146   cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:07.146   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:07.146   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:07.146 
00:02:07.159 [Pipeline] }
00:02:07.175 [Pipeline] // stage
00:02:07.184 [Pipeline] dir
00:02:07.184 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:07.186 [Pipeline] {
00:02:07.198 [Pipeline] catchError
00:02:07.199 [Pipeline] {
00:02:07.212 [Pipeline] sh
00:02:07.498 + vagrant ssh-config --host vagrant
00:02:07.498 + sed -ne '/^Host/,$p'
00:02:07.498 + tee ssh_conf
00:02:10.824 Host vagrant
00:02:10.824 HostName 192.168.121.229
00:02:10.824 User vagrant
00:02:10.824 Port 22
00:02:10.824 UserKnownHostsFile /dev/null
00:02:10.824 StrictHostKeyChecking no
00:02:10.824 PasswordAuthentication no
00:02:10.824 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:10.824 IdentitiesOnly yes
00:02:10.824 LogLevel FATAL
00:02:10.824 ForwardAgent yes
00:02:10.824 ForwardX11 yes
00:02:10.824 
00:02:10.841 [Pipeline] withEnv
00:02:10.843 [Pipeline] {
00:02:10.859 [Pipeline] sh
00:02:11.147 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:11.147 source /etc/os-release
00:02:11.147 [[ -e /image.version ]] && img=$(< /image.version)
00:02:11.147 # Minimal, systemd-like check.
00:02:11.147 if [[ -e /.dockerenv ]]; then
00:02:11.147 # Clear garbage from the node'\''s name:
00:02:11.147 # agt-er_autotest_547-896 -> autotest_547-896
00:02:11.147 # $HOSTNAME is the actual container id
00:02:11.147 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:11.147 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:11.147 # We can assume this is a mount from a host where container is running,
00:02:11.147 # so fetch its hostname to easily identify the target swarm worker.
00:02:11.147 container="$(< /etc/hostname) ($agent)"
00:02:11.147 else
00:02:11.147 # Fallback
00:02:11.147 container=$agent
00:02:11.147 fi
00:02:11.147 fi
00:02:11.147 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:11.147 '
00:02:11.451 [Pipeline] }
00:02:11.470 [Pipeline] // withEnv
00:02:11.478 [Pipeline] setCustomBuildProperty
00:02:11.494 [Pipeline] stage
00:02:11.497 [Pipeline] { (Tests)
00:02:11.514 [Pipeline] sh
00:02:11.803 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:12.081 [Pipeline] sh
00:02:12.367 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:12.645 [Pipeline] timeout
00:02:12.645 Timeout set to expire in 50 min
00:02:12.647 [Pipeline] {
00:02:12.660 [Pipeline] sh
00:02:12.947 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:13.518 HEAD is now at 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:02:13.529 [Pipeline] sh
00:02:13.811 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:14.085 [Pipeline] sh
00:02:14.404 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:14.433 [Pipeline] sh
00:02:14.720 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:14.982 ++ readlink -f spdk_repo
00:02:14.982 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:14.982 + [[ -n /home/vagrant/spdk_repo ]]
00:02:14.982 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:14.982 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:14.982 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:14.982 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:14.982 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:14.982 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:14.982 + cd /home/vagrant/spdk_repo
00:02:14.982 + source /etc/os-release
00:02:14.982 ++ NAME='Fedora Linux'
00:02:14.982 ++ VERSION='39 (Cloud Edition)'
00:02:14.982 ++ ID=fedora
00:02:14.982 ++ VERSION_ID=39
00:02:14.982 ++ VERSION_CODENAME=
00:02:14.982 ++ PLATFORM_ID=platform:f39
00:02:14.982 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.982 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.983 ++ LOGO=fedora-logo-icon
00:02:14.983 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.983 ++ HOME_URL=https://fedoraproject.org/
00:02:14.983 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.983 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.983 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.983 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.983 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.983 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.983 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.983 ++ SUPPORT_END=2024-11-12
00:02:14.983 ++ VARIANT='Cloud Edition'
00:02:14.983 ++ VARIANT_ID=cloud
00:02:14.983 + uname -a
00:02:14.983 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:14.983 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:15.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:15.505 Hugepages
00:02:15.505 node hugesize free / total
00:02:15.765 node0 1048576kB 0 / 0
00:02:15.765 node0 2048kB 0 / 0
00:02:15.765 
00:02:15.765 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:15.766 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:15.766 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:15.766 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:15.766 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:15.766 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:15.766 + rm -f /tmp/spdk-ld-path
00:02:15.766 + source autorun-spdk.conf
00:02:15.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:15.766 ++ SPDK_TEST_NVME=1
00:02:15.766 ++ SPDK_TEST_FTL=1
00:02:15.766 ++ SPDK_TEST_ISAL=1
00:02:15.766 ++ SPDK_RUN_ASAN=1
00:02:15.766 ++ SPDK_RUN_UBSAN=1
00:02:15.766 ++ SPDK_TEST_XNVME=1
00:02:15.766 ++ SPDK_TEST_NVME_FDP=1
00:02:15.766 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:15.766 ++ RUN_NIGHTLY=0
00:02:15.766 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:15.766 + [[ -n '' ]]
00:02:15.766 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:15.766 + for M in /var/spdk/build-*-manifest.txt
00:02:15.766 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:15.766 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.766 + for M in /var/spdk/build-*-manifest.txt
00:02:15.766 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:15.766 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.766 + for M in /var/spdk/build-*-manifest.txt
00:02:15.766 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:15.766 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.766 ++ uname
00:02:15.766 + [[ Linux == \L\i\n\u\x ]]
00:02:15.766 + sudo dmesg -T
00:02:15.766 + sudo dmesg --clear
00:02:16.027 + dmesg_pid=5019
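The per-node hugepage counts in the setup.sh status table above come from standard kernel interfaces; as a rough illustration only (this loop is not part of the job's scripts), the same free/total pairs can be read straight from sysfs:

    # Sketch: print free/total hugepages per NUMA node and page size (standard sysfs layout).
    for dir in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        node=${dir#/sys/devices/system/node/}; node=${node%%/*}   # e.g. node0
        size=${dir##*hugepages-}                                  # e.g. 2048kB
        echo "$node $size: $(cat "$dir/free_hugepages") free / $(cat "$dir/nr_hugepages") total"
    done

On this VM it would report the same "node0 2048kB 0 / 0" style pairs as the status output above.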
+ [[ Fedora Linux == FreeBSD ]] 00:02:16.027 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.027 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.027 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:16.027 + [[ -x /usr/src/fio-static/fio ]] 00:02:16.027 + sudo dmesg -Tw 00:02:16.027 + export FIO_BIN=/usr/src/fio-static/fio 00:02:16.027 + FIO_BIN=/usr/src/fio-static/fio 00:02:16.027 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:16.027 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:16.027 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:16.027 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.027 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.027 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:16.027 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.027 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.027 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.027 09:14:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:16.027 09:14:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:16.027 09:14:41 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:16.027 09:14:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:16.027 09:14:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.027 09:14:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:16.027 09:14:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:16.027 09:14:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:16.027 09:14:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:16.027 09:14:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.027 09:14:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.027 09:14:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.027 09:14:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.027 09:14:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.027 09:14:41 -- paths/export.sh@5 -- $ export PATH 00:02:16.027 09:14:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.027 09:14:41 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:16.027 09:14:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:16.027 09:14:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732094081.XXXXXX 00:02:16.027 09:14:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732094081.uZvCrr 00:02:16.027 09:14:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:16.027 09:14:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:16.027 09:14:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:16.027 09:14:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:16.027 09:14:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:16.027 09:14:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:16.027 09:14:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:16.027 09:14:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.027 09:14:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:16.027 09:14:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:16.027 09:14:41 -- pm/common@17 -- $ local monitor 00:02:16.027 09:14:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.027 09:14:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.027 09:14:41 -- pm/common@25 -- $ sleep 1 00:02:16.027 09:14:41 -- pm/common@21 -- $ date +%s 00:02:16.027 09:14:41 -- pm/common@21 -- $ date +%s 00:02:16.028 09:14:41 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732094081 00:02:16.028 09:14:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732094081 00:02:16.028 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732094081_collect-cpu-load.pm.log 00:02:16.028 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732094081_collect-vmstat.pm.log 00:02:16.972 09:14:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:16.972 09:14:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.972 09:14:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.972 09:14:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.972 09:14:42 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.972 Wed Nov 20 09:14:42 AM UTC 2024 00:02:16.972 09:14:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.972 v25.01-pre-205-g2741dd1ac 00:02:16.972 09:14:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:16.972 09:14:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:16.972 09:14:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:16.972 09:14:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:17.233 09:14:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.233 ************************************ 00:02:17.233 START TEST asan 00:02:17.233 ************************************ 00:02:17.233 using asan 00:02:17.233 09:14:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:17.233 00:02:17.233 real 0m0.000s 00:02:17.233 user 0m0.000s 00:02:17.233 sys 0m0.000s 00:02:17.233 ************************************ 00:02:17.233 END TEST asan 00:02:17.233 ************************************ 00:02:17.233 09:14:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:17.233 09:14:42 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:17.233 09:14:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:17.233 09:14:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:17.233 09:14:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:17.233 09:14:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:17.233 09:14:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.233 ************************************ 00:02:17.233 START TEST ubsan 00:02:17.233 ************************************ 00:02:17.233 using ubsan 00:02:17.233 ************************************ 00:02:17.233 END TEST ubsan 00:02:17.233 ************************************ 00:02:17.233 09:14:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:17.233 00:02:17.233 real 0m0.000s 00:02:17.233 user 0m0.000s 00:02:17.233 sys 0m0.000s 00:02:17.233 09:14:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:17.233 09:14:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:17.233 09:14:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:17.233 09:14:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:17.233 09:14:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:17.233 09:14:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:17.233 09:14:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:17.233 09:14:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:17.233 09:14:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
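The START TEST / END TEST banners and the zeroed real/user/sys timings above are emitted by SPDK's run_test helper in autotest_common.sh; the sketch below only illustrates that banner-plus-timing shape and is not the actual implementation (run_test_sketch is an invented name):

    # Illustration: wrap a command in banners and a bash 'time', like the output above.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test_sketch asan echo 'using asan'   # prints 'using asan' and ~0m0.000s timings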
00:02:17.233 09:14:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:17.233 09:14:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:17.233 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:17.233 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:17.805 Using 'verbs' RDMA provider
00:02:30.977 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:40.970 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:40.970 Creating mk/config.mk...done.
00:02:40.970 Creating mk/cc.flags.mk...done.
00:02:40.970 Type 'make' to build.
00:02:40.970 09:15:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:40.970 09:15:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.970 09:15:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.970 09:15:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.970 ************************************
00:02:40.970 START TEST make
00:02:40.970 ************************************
00:02:40.970 09:15:05 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:40.970 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:40.970 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:40.970 meson setup builddir \
00:02:40.970 -Dwith-libaio=enabled \
00:02:40.970 -Dwith-liburing=enabled \
00:02:40.970 -Dwith-libvfn=disabled \
00:02:40.970 -Dwith-spdk=disabled \
00:02:40.970 -Dexamples=false \
00:02:40.970 -Dtests=false \
00:02:40.970 -Dtools=false && \
00:02:40.970 meson compile -C builddir && \
00:02:40.970 cd -)
00:02:40.970 make[1]: Nothing to be done for 'all'.
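Because meson records the options above in the build directory, the xnvme step can be inspected or repeated by hand without rerunning the whole pipeline; a minimal sketch, assuming the builddir created above still exists at this job's paths:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # Show the options recorded at setup time (libaio/liburing enabled; libvfn,
    # spdk, examples, tests and tools disabled, as in the invocation above).
    meson configure builddir
    # Recompile just xnvme after a source change.
    meson compile -C builddir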
00:02:42.398 The Meson build system 00:02:42.398 Version: 1.5.0 00:02:42.398 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:42.398 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:42.398 Build type: native build 00:02:42.398 Project name: xnvme 00:02:42.398 Project version: 0.7.5 00:02:42.398 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.398 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.398 Host machine cpu family: x86_64 00:02:42.398 Host machine cpu: x86_64 00:02:42.398 Message: host_machine.system: linux 00:02:42.398 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:42.398 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:42.398 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:42.398 Run-time dependency threads found: YES 00:02:42.398 Has header "setupapi.h" : NO 00:02:42.398 Has header "linux/blkzoned.h" : YES 00:02:42.398 Has header "linux/blkzoned.h" : YES (cached) 00:02:42.398 Has header "libaio.h" : YES 00:02:42.398 Library aio found: YES 00:02:42.398 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.398 Run-time dependency liburing found: YES 2.2 00:02:42.398 Dependency libvfn skipped: feature with-libvfn disabled 00:02:42.398 Found CMake: /usr/bin/cmake (3.27.7) 00:02:42.398 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:42.398 Subproject spdk : skipped: feature with-spdk disabled 00:02:42.398 Run-time dependency appleframeworks found: NO (tried framework) 00:02:42.398 Run-time dependency appleframeworks found: NO (tried framework) 00:02:42.398 Library rt found: YES 00:02:42.398 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:42.398 Configuring xnvme_config.h using configuration 00:02:42.398 Configuring xnvme.spec using configuration 00:02:42.398 Run-time dependency bash-completion found: YES 2.11 00:02:42.398 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:42.398 Program cp found: YES (/usr/bin/cp) 00:02:42.398 Build targets in project: 3 00:02:42.398 00:02:42.398 xnvme 0.7.5 00:02:42.398 00:02:42.398 Subprojects 00:02:42.398 spdk : NO Feature 'with-spdk' disabled 00:02:42.398 00:02:42.398 User defined options 00:02:42.398 examples : false 00:02:42.398 tests : false 00:02:42.398 tools : false 00:02:42.398 with-libaio : enabled 00:02:42.398 with-liburing: enabled 00:02:42.398 with-libvfn : disabled 00:02:42.398 with-spdk : disabled 00:02:42.398 00:02:42.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.398 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:42.398 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:42.655 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:42.655 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:42.655 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:42.655 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:42.655 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:42.655 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:42.655 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:42.655 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:42.655 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:02:42.655 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:42.655 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:42.655 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:42.655 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:42.655 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:42.655 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:42.655 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:42.655 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:42.655 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:42.655 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:42.655 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:42.655 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:42.655 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:42.655 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:42.913 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:42.913 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:42.913 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:42.913 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:42.913 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:42.913 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:42.913 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:42.913 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:42.913 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:42.913 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:42.913 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:42.913 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:42.913 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:42.913 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:42.913 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:42.913 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:42.913 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:42.913 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:42.913 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:42.913 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:42.913 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:42.913 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:42.913 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:42.913 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:42.913 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:42.913 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
00:02:42.913 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:42.913 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:42.913 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:42.913 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:42.913 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:42.913 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:42.913 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:42.913 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:42.913 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:42.913 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:43.171 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:43.171 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:43.171 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:43.171 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:43.171 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:43.171 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:43.171 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:43.171 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:43.171 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:43.171 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:43.171 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:43.171 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:43.171 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:43.429 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:43.429 [75/76] Linking static target lib/libxnvme.a 00:02:43.429 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:43.429 INFO: autodetecting backend as ninja 00:02:43.429 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:43.429 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:49.986 The Meson build system 00:02:49.986 Version: 1.5.0 00:02:49.986 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:49.986 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:49.986 Build type: native build 00:02:49.986 Program cat found: YES (/usr/bin/cat) 00:02:49.986 Project name: DPDK 00:02:49.986 Project version: 24.03.0 00:02:49.986 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.986 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.986 Host machine cpu family: x86_64 00:02:49.986 Host machine cpu: x86_64 00:02:49.986 Message: ## Building in Developer Mode ## 00:02:49.986 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:49.986 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:49.986 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:49.986 Program python3 found: YES (/usr/bin/python3) 00:02:49.986 Program cat found: YES (/usr/bin/cat) 00:02:49.986 Compiler for C supports arguments -march=native: YES 00:02:49.986 Checking for size of "void *" : 8 00:02:49.986 Checking for size of "void *" : 8 (cached) 00:02:49.986 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:02:49.986 Library m found: YES 00:02:49.986 Library numa found: YES 00:02:49.986 Has header "numaif.h" : YES 00:02:49.986 Library fdt found: NO 00:02:49.986 Library execinfo found: NO 00:02:49.986 Has header "execinfo.h" : YES 00:02:49.986 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.986 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:49.986 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:49.986 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:49.986 Run-time dependency openssl found: YES 3.1.1 00:02:49.986 Run-time dependency libpcap found: YES 1.10.4 00:02:49.987 Has header "pcap.h" with dependency libpcap: YES 00:02:49.987 Compiler for C supports arguments -Wcast-qual: YES 00:02:49.987 Compiler for C supports arguments -Wdeprecated: YES 00:02:49.987 Compiler for C supports arguments -Wformat: YES 00:02:49.987 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:49.987 Compiler for C supports arguments -Wformat-security: NO 00:02:49.987 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.987 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:49.987 Compiler for C supports arguments -Wnested-externs: YES 00:02:49.987 Compiler for C supports arguments -Wold-style-definition: YES 00:02:49.987 Compiler for C supports arguments -Wpointer-arith: YES 00:02:49.987 Compiler for C supports arguments -Wsign-compare: YES 00:02:49.987 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:49.987 Compiler for C supports arguments -Wundef: YES 00:02:49.987 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.987 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:49.987 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:49.987 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.987 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:49.987 Program objdump found: YES (/usr/bin/objdump) 00:02:49.987 Compiler for C supports arguments -mavx512f: YES 00:02:49.987 Checking if "AVX512 checking" compiles: YES 00:02:49.987 Fetching value of define "__SSE4_2__" : 1 00:02:49.987 Fetching value of define "__AES__" : 1 00:02:49.987 Fetching value of define "__AVX__" : 1 00:02:49.987 Fetching value of define "__AVX2__" : 1 00:02:49.987 Fetching value of define "__AVX512BW__" : 1 00:02:49.987 Fetching value of define "__AVX512CD__" : 1 00:02:49.987 Fetching value of define "__AVX512DQ__" : 1 00:02:49.987 Fetching value of define "__AVX512F__" : 1 00:02:49.987 Fetching value of define "__AVX512VL__" : 1 00:02:49.987 Fetching value of define "__PCLMUL__" : 1 00:02:49.987 Fetching value of define "__RDRND__" : 1 00:02:49.987 Fetching value of define "__RDSEED__" : 1 00:02:49.987 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:49.987 Fetching value of define "__znver1__" : (undefined) 00:02:49.987 Fetching value of define "__znver2__" : (undefined) 00:02:49.987 Fetching value of define "__znver3__" : (undefined) 00:02:49.987 Fetching value of define "__znver4__" : (undefined) 00:02:49.987 Library asan found: YES 00:02:49.987 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:49.987 Message: lib/log: Defining dependency "log" 00:02:49.987 Message: lib/kvargs: Defining dependency "kvargs" 00:02:49.987 Message: lib/telemetry: Defining dependency "telemetry" 00:02:49.987 Library rt found: YES 00:02:49.987 Checking for function "getentropy" : NO 
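The long runs of "Compiler for C supports arguments" and "Fetching value of define" lines above are Meson probes that compile or preprocess tiny test programs; as a hand-rolled illustration of the same idea (not Meson's internals), assuming cc is gcc or clang:

    # Does the compiler accept a flag? Try to build an empty program with it.
    echo 'int main(void) { return 0; }' | cc -Werror -mavx512f -x c - -o /dev/null \
        && echo '-mavx512f: YES' || echo '-mavx512f: NO'
    # What does a predefined macro expand to? Ask the preprocessor and read the last line.
    echo __AVX512F__ | cc -march=native -E -x c - | tail -n1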
00:02:49.987 Message: lib/eal: Defining dependency "eal" 00:02:49.987 Message: lib/ring: Defining dependency "ring" 00:02:49.987 Message: lib/rcu: Defining dependency "rcu" 00:02:49.987 Message: lib/mempool: Defining dependency "mempool" 00:02:49.987 Message: lib/mbuf: Defining dependency "mbuf" 00:02:49.987 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:49.987 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:49.987 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:49.987 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:49.987 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:49.987 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:49.987 Compiler for C supports arguments -mpclmul: YES 00:02:49.987 Compiler for C supports arguments -maes: YES 00:02:49.987 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.987 Compiler for C supports arguments -mavx512bw: YES 00:02:49.987 Compiler for C supports arguments -mavx512dq: YES 00:02:49.987 Compiler for C supports arguments -mavx512vl: YES 00:02:49.987 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:49.987 Compiler for C supports arguments -mavx2: YES 00:02:49.987 Compiler for C supports arguments -mavx: YES 00:02:49.987 Message: lib/net: Defining dependency "net" 00:02:49.987 Message: lib/meter: Defining dependency "meter" 00:02:49.987 Message: lib/ethdev: Defining dependency "ethdev" 00:02:49.987 Message: lib/pci: Defining dependency "pci" 00:02:49.987 Message: lib/cmdline: Defining dependency "cmdline" 00:02:49.987 Message: lib/hash: Defining dependency "hash" 00:02:49.987 Message: lib/timer: Defining dependency "timer" 00:02:49.987 Message: lib/compressdev: Defining dependency "compressdev" 00:02:49.987 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:49.987 Message: lib/dmadev: Defining dependency "dmadev" 00:02:49.987 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:49.987 Message: lib/power: Defining dependency "power" 00:02:49.987 Message: lib/reorder: Defining dependency "reorder" 00:02:49.987 Message: lib/security: Defining dependency "security" 00:02:49.987 Has header "linux/userfaultfd.h" : YES 00:02:49.987 Has header "linux/vduse.h" : YES 00:02:49.987 Message: lib/vhost: Defining dependency "vhost" 00:02:49.987 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:49.987 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:49.987 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:49.987 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.987 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.987 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.987 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.987 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.987 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.987 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.987 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.987 Configuring doxy-api-html.conf using configuration 00:02:49.987 Configuring doxy-api-man.conf using configuration 00:02:49.987 Program mandb found: YES (/usr/bin/mandb) 00:02:49.987 Program sphinx-build found: NO 00:02:49.987 Configuring rte_build_config.h using configuration 00:02:49.987 Message: 00:02:49.987 ================= 00:02:49.987 Applications 
Enabled
00:02:49.987 =================
00:02:49.987
00:02:49.987 apps:
00:02:49.987
00:02:49.987
00:02:49.987 Message:
00:02:49.987 =================
00:02:49.987 Libraries Enabled
00:02:49.987 =================
00:02:49.987
00:02:49.987 libs:
00:02:49.987 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:49.987 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:49.987 cryptodev, dmadev, power, reorder, security, vhost,
00:02:49.987
00:02:49.987 Message:
00:02:49.987 ===============
00:02:49.987 Drivers Enabled
00:02:49.987 ===============
00:02:49.987
00:02:49.987 common:
00:02:49.987
00:02:49.987 bus:
00:02:49.987 pci, vdev,
00:02:49.987 mempool:
00:02:49.987 ring,
00:02:49.987 dma:
00:02:49.987
00:02:49.987 net:
00:02:49.987
00:02:49.987 crypto:
00:02:49.987
00:02:49.987 compress:
00:02:49.987
00:02:49.987 vdpa:
00:02:49.987
00:02:49.987
00:02:49.987 Message:
00:02:49.987 =================
00:02:49.987 Content Skipped
00:02:49.987 =================
00:02:49.987
00:02:49.987 apps:
00:02:49.987 dumpcap: explicitly disabled via build config
00:02:49.987 graph: explicitly disabled via build config
00:02:49.987 pdump: explicitly disabled via build config
00:02:49.987 proc-info: explicitly disabled via build config
00:02:49.987 test-acl: explicitly disabled via build config
00:02:49.987 test-bbdev: explicitly disabled via build config
00:02:49.987 test-cmdline: explicitly disabled via build config
00:02:49.987 test-compress-perf: explicitly disabled via build config
00:02:49.987 test-crypto-perf: explicitly disabled via build config
00:02:49.987 test-dma-perf: explicitly disabled via build config
00:02:49.987 test-eventdev: explicitly disabled via build config
00:02:49.987 test-fib: explicitly disabled via build config
00:02:49.987 test-flow-perf: explicitly disabled via build config
00:02:49.987 test-gpudev: explicitly disabled via build config
00:02:49.987 test-mldev: explicitly disabled via build config
00:02:49.987 test-pipeline: explicitly disabled via build config
00:02:49.987 test-pmd: explicitly disabled via build config
00:02:49.987 test-regex: explicitly disabled via build config
00:02:49.987 test-sad: explicitly disabled via build config
00:02:49.987 test-security-perf: explicitly disabled via build config
00:02:49.987
00:02:49.987 libs:
00:02:49.987 argparse: explicitly disabled via build config
00:02:49.987 metrics: explicitly disabled via build config
00:02:49.987 acl: explicitly disabled via build config
00:02:49.987 bbdev: explicitly disabled via build config
00:02:49.987 bitratestats: explicitly disabled via build config
00:02:49.987 bpf: explicitly disabled via build config
00:02:49.987 cfgfile: explicitly disabled via build config
00:02:49.987 distributor: explicitly disabled via build config
00:02:49.987 efd: explicitly disabled via build config
00:02:49.987 eventdev: explicitly disabled via build config
00:02:49.987 dispatcher: explicitly disabled via build config
00:02:49.987 gpudev: explicitly disabled via build config
00:02:49.987 gro: explicitly disabled via build config
00:02:49.987 gso: explicitly disabled via build config
00:02:49.987 ip_frag: explicitly disabled via build config
00:02:49.987 jobstats: explicitly disabled via build config
00:02:49.987 latencystats: explicitly disabled via build config
00:02:49.987 lpm: explicitly disabled via build config
00:02:49.987 member: explicitly disabled via build config
00:02:49.987 pcapng: explicitly disabled via build config
00:02:49.987 rawdev: explicitly disabled via build config
00:02:49.987 regexdev: explicitly disabled via build config
00:02:49.987 mldev: explicitly disabled via build config
00:02:49.987 rib: explicitly disabled via build config
00:02:49.987 sched: explicitly disabled via build config
00:02:49.987 stack: explicitly disabled via build config
00:02:49.987 ipsec: explicitly disabled via build config
00:02:49.987 pdcp: explicitly disabled via build config
00:02:49.987 fib: explicitly disabled via build config
00:02:49.987 port: explicitly disabled via build config
00:02:49.987 pdump: explicitly disabled via build config
00:02:49.987 table: explicitly disabled via build config
00:02:49.987 pipeline: explicitly disabled via build config
00:02:49.987 graph: explicitly disabled via build config
00:02:49.988 node: explicitly disabled via build config
00:02:49.988
00:02:49.988 drivers:
00:02:49.988 common/cpt: not in enabled drivers build config
00:02:49.988 common/dpaax: not in enabled drivers build config
00:02:49.988 common/iavf: not in enabled drivers build config
00:02:49.988 common/idpf: not in enabled drivers build config
00:02:49.988 common/ionic: not in enabled drivers build config
00:02:49.988 common/mvep: not in enabled drivers build config
00:02:49.988 common/octeontx: not in enabled drivers build config
00:02:49.988 bus/auxiliary: not in enabled drivers build config
00:02:49.988 bus/cdx: not in enabled drivers build config
00:02:49.988 bus/dpaa: not in enabled drivers build config
00:02:49.988 bus/fslmc: not in enabled drivers build config
00:02:49.988 bus/ifpga: not in enabled drivers build config
00:02:49.988 bus/platform: not in enabled drivers build config
00:02:49.988 bus/uacce: not in enabled drivers build config
00:02:49.988 bus/vmbus: not in enabled drivers build config
00:02:49.988 common/cnxk: not in enabled drivers build config
00:02:49.988 common/mlx5: not in enabled drivers build config
00:02:49.988 common/nfp: not in enabled drivers build config
00:02:49.988 common/nitrox: not in enabled drivers build config
00:02:49.988 common/qat: not in enabled drivers build config
00:02:49.988 common/sfc_efx: not in enabled drivers build config
00:02:49.988 mempool/bucket: not in enabled drivers build config
00:02:49.988 mempool/cnxk: not in enabled drivers build config
00:02:49.988 mempool/dpaa: not in enabled drivers build config
00:02:49.988 mempool/dpaa2: not in enabled drivers build config
00:02:49.988 mempool/octeontx: not in enabled drivers build config
00:02:49.988 mempool/stack: not in enabled drivers build config
00:02:49.988 dma/cnxk: not in enabled drivers build config
00:02:49.988 dma/dpaa: not in enabled drivers build config
00:02:49.988 dma/dpaa2: not in enabled drivers build config
00:02:49.988 dma/hisilicon: not in enabled drivers build config
00:02:49.988 dma/idxd: not in enabled drivers build config
00:02:49.988 dma/ioat: not in enabled drivers build config
00:02:49.988 dma/skeleton: not in enabled drivers build config
00:02:49.988 net/af_packet: not in enabled drivers build config
00:02:49.988 net/af_xdp: not in enabled drivers build config
00:02:49.988 net/ark: not in enabled drivers build config
00:02:49.988 net/atlantic: not in enabled drivers build config
00:02:49.988 net/avp: not in enabled drivers build config
00:02:49.988 net/axgbe: not in enabled drivers build config
00:02:49.988 net/bnx2x: not in enabled drivers build config
00:02:49.988 net/bnxt: not in enabled drivers build config
00:02:49.988 net/bonding: not in enabled drivers build config
00:02:49.988 net/cnxk: not in enabled drivers build config
00:02:49.988 net/cpfl: not in enabled drivers build config
00:02:49.988 net/cxgbe: not in enabled drivers build config
00:02:49.988 net/dpaa: not in enabled drivers build config
00:02:49.988 net/dpaa2: not in enabled drivers build config
00:02:49.988 net/e1000: not in enabled drivers build config
00:02:49.988 net/ena: not in enabled drivers build config
00:02:49.988 net/enetc: not in enabled drivers build config
00:02:49.988 net/enetfec: not in enabled drivers build config
00:02:49.988 net/enic: not in enabled drivers build config
00:02:49.988 net/failsafe: not in enabled drivers build config
00:02:49.988 net/fm10k: not in enabled drivers build config
00:02:49.988 net/gve: not in enabled drivers build config
00:02:49.988 net/hinic: not in enabled drivers build config
00:02:49.988 net/hns3: not in enabled drivers build config
00:02:49.988 net/i40e: not in enabled drivers build config
00:02:49.988 net/iavf: not in enabled drivers build config
00:02:49.988 net/ice: not in enabled drivers build config
00:02:49.988 net/idpf: not in enabled drivers build config
00:02:49.988 net/igc: not in enabled drivers build config
00:02:49.988 net/ionic: not in enabled drivers build config
00:02:49.988 net/ipn3ke: not in enabled drivers build config
00:02:49.988 net/ixgbe: not in enabled drivers build config
00:02:49.988 net/mana: not in enabled drivers build config
00:02:49.988 net/memif: not in enabled drivers build config
00:02:49.988 net/mlx4: not in enabled drivers build config
00:02:49.988 net/mlx5: not in enabled drivers build config
00:02:49.988 net/mvneta: not in enabled drivers build config
00:02:49.988 net/mvpp2: not in enabled drivers build config
00:02:49.988 net/netvsc: not in enabled drivers build config
00:02:49.988 net/nfb: not in enabled drivers build config
00:02:49.988 net/nfp: not in enabled drivers build config
00:02:49.988 net/ngbe: not in enabled drivers build config
00:02:49.988 net/null: not in enabled drivers build config
00:02:49.988 net/octeontx: not in enabled drivers build config
00:02:49.988 net/octeon_ep: not in enabled drivers build config
00:02:49.988 net/pcap: not in enabled drivers build config
00:02:49.988 net/pfe: not in enabled drivers build config
00:02:49.988 net/qede: not in enabled drivers build config
00:02:49.988 net/ring: not in enabled drivers build config
00:02:49.988 net/sfc: not in enabled drivers build config
00:02:49.988 net/softnic: not in enabled drivers build config
00:02:49.988 net/tap: not in enabled drivers build config
00:02:49.988 net/thunderx: not in enabled drivers build config
00:02:49.988 net/txgbe: not in enabled drivers build config
00:02:49.988 net/vdev_netvsc: not in enabled drivers build config
00:02:49.988 net/vhost: not in enabled drivers build config
00:02:49.988 net/virtio: not in enabled drivers build config
00:02:49.988 net/vmxnet3: not in enabled drivers build config
00:02:49.988 raw/*: missing internal dependency, "rawdev"
00:02:49.988 crypto/armv8: not in enabled drivers build config
00:02:49.988 crypto/bcmfs: not in enabled drivers build config
00:02:49.988 crypto/caam_jr: not in enabled drivers build config
00:02:49.988 crypto/ccp: not in enabled drivers build config
00:02:49.988 crypto/cnxk: not in enabled drivers build config
00:02:49.988 crypto/dpaa_sec: not in enabled drivers build config
00:02:49.988 crypto/dpaa2_sec: not in enabled drivers build config
00:02:49.988 crypto/ipsec_mb: not in enabled drivers build config
00:02:49.988 crypto/mlx5: not in enabled drivers build config
00:02:49.988 crypto/mvsam: not in enabled drivers build config
00:02:49.988 crypto/nitrox: not in enabled drivers build config
00:02:49.988 crypto/null: not in enabled drivers build config
00:02:49.988 crypto/octeontx: not in enabled drivers build config
00:02:49.988 crypto/openssl: not in enabled drivers build config
00:02:49.988 crypto/scheduler: not in enabled drivers build config
00:02:49.988 crypto/uadk: not in enabled drivers build config
00:02:49.988 crypto/virtio: not in enabled drivers build config
00:02:49.988 compress/isal: not in enabled drivers build config
00:02:49.988 compress/mlx5: not in enabled drivers build config
00:02:49.988 compress/nitrox: not in enabled drivers build config
00:02:49.988 compress/octeontx: not in enabled drivers build config
00:02:49.988 compress/zlib: not in enabled drivers build config
00:02:49.988 regex/*: missing internal dependency, "regexdev"
00:02:49.988 ml/*: missing internal dependency, "mldev"
00:02:49.988 vdpa/ifc: not in enabled drivers build config
00:02:49.988 vdpa/mlx5: not in enabled drivers build config
00:02:49.988 vdpa/nfp: not in enabled drivers build config
00:02:49.988 vdpa/sfc: not in enabled drivers build config
00:02:49.988 event/*: missing internal dependency, "eventdev"
00:02:49.988 baseband/*: missing internal dependency, "bbdev"
00:02:49.988 gpu/*: missing internal dependency, "gpudev"
00:02:49.988
00:02:49.988
00:02:49.988 Build targets in project: 84
00:02:49.988
00:02:49.988 DPDK 24.03.0
00:02:49.988
00:02:49.988 User defined options
00:02:49.988 buildtype : debug
00:02:49.988 default_library : shared
00:02:49.988 libdir : lib
00:02:49.988 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:49.988 b_sanitize : address
00:02:49.988 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:49.988 c_link_args :
00:02:49.988 cpu_instruction_set: native
00:02:49.988 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:49.988 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:49.988 enable_docs : false
00:02:49.988 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:49.988 enable_kmods : false
00:02:49.988 max_lcores : 128
00:02:49.988 tests : false
00:02:49.988
00:02:49.988 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:50.555 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:50.555 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:50.555 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:50.555 [3/267] Linking static target lib/librte_kvargs.a
00:02:50.555 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:50.555 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:50.555 [6/267] Linking static target lib/librte_log.a
00:02:50.555 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:50.814 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:50.814 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
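Note: the "User defined options" summary above corresponds, approximately, to a meson invocation of the shape sketched below. This is a reconstruction from the summary for anyone reproducing the DPDK configuration by hand, not the literal command line the driver scripts assembled; the elided -Ddisable_apps/-Ddisable_libs/-Denable_drivers values are the full comma-separated lists printed above.

    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug --default-library=shared --libdir=lib \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native -Dmax_lcores=128 \
        -Ddisable_apps=dumpcap,graph,pdump,... \
        -Ddisable_libs=acl,argparse,bbdev,... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,... \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10

The ninja line is the exact backend command the log reports further down; the [N/267] entries that follow are its progress output.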
00:02:50.814 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:50.814 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:50.814 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:50.814 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.814 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:50.814 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:50.814 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:51.073 [17/267] Linking static target lib/librte_telemetry.a
00:02:51.073 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:51.073 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:51.331 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:51.331 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:51.331 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.331 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:51.331 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:51.331 [25/267] Linking target lib/librte_log.so.24.1
00:02:51.331 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:51.331 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:51.331 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:51.590 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:51.590 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:51.590 [31/267] Linking target lib/librte_kvargs.so.24.1
00:02:51.590 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.590 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:51.590 [34/267] Linking target lib/librte_telemetry.so.24.1
00:02:51.590 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:51.848 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:51.848 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:51.848 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:51.848 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:51.848 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:51.848 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:51.848 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:51.848 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:51.848 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:51.848 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:51.848 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:52.106 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:52.106 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:52.106 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:52.365 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:52.365 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:52.365 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:52.365 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:52.365 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:52.365 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:52.623 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:52.623 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:52.623 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:52.623 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:52.623 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:52.623 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:52.623 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:52.623 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:52.623 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:52.623 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:52.623 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:52.881 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:52.881 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:53.145 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:53.145 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:53.145 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:53.145 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:53.145 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:53.145 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:53.145 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:53.145 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:53.412 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:53.412 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:53.412 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:53.412 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:53.412 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:53.412 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:53.669 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:53.669 [84/267] Linking static target lib/librte_ring.a
00:02:53.669 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:53.669 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:53.669 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:53.669 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:53.669 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:53.669 [90/267] Linking static target lib/librte_eal.a
00:02:53.669 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:53.927 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:53.927 [93/267] Linking static target lib/librte_mempool.a
00:02:53.927 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:53.927 [95/267] Linking static target lib/librte_rcu.a
00:02:53.927 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:53.927 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.927 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:54.184 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:54.184 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:54.184 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:54.184 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:54.185 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.185 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:54.185 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:54.442 [106/267] Linking static target lib/librte_net.a
00:02:54.442 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:54.442 [108/267] Linking static target lib/librte_meter.a
00:02:54.442 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:54.442 [110/267] Linking static target lib/librte_mbuf.a
00:02:54.442 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:54.442 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:54.700 [113/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.700 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.700 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:54.700 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:54.700 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.958 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:54.958 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:55.216 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:55.216 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:55.216 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.216 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:55.475 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:55.475 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:55.475 [126/267] Linking static target lib/librte_pci.a
00:02:55.475 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:55.475 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:55.475 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:55.475 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:55.475 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:55.475 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:55.733 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:55.733 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:55.733 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:55.733 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.733 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:55.733 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:55.733 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:55.733 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:55.733 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:55.733 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:55.733 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:55.992 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:55.992 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:55.992 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:55.992 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:55.992 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:55.992 [149/267] Linking static target lib/librte_cmdline.a
00:02:56.250 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:56.250 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:56.250 [152/267] Linking static target lib/librte_timer.a
00:02:56.250 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:56.250 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:56.250 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:56.250 [156/267] Linking static target lib/librte_ethdev.a
00:02:56.508 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:56.508 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:56.508 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:56.508 [160/267] Linking static target lib/librte_compressdev.a
00:02:56.508 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:56.766 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:56.766 [163/267] Linking static target lib/librte_hash.a
00:02:56.766 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.766 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:56.766 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:56.766 [167/267] Linking static target lib/librte_dmadev.a
00:02:56.766 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:57.024 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:57.024 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:57.024 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:57.024 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:57.024 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.282 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.282 [175/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.282 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:57.282 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:57.282 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:57.282 [179/267] Linking static target lib/librte_cryptodev.a
00:02:57.282 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:57.282 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:57.282 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:57.540 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.540 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:57.540 [185/267] Linking static target lib/librte_power.a
00:02:57.863 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:57.863 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:57.863 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:57.863 [189/267] Linking static target lib/librte_reorder.a
00:02:57.863 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:57.863 [191/267] Linking static target lib/librte_security.a
00:02:57.863 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:58.121 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:58.121 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.379 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.379 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:58.379 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:58.637 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:58.637 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.637 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:58.637 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:58.894 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:58.894 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:58.894 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:58.894 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:59.151 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:59.151 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:59.151 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:59.151 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:59.151 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.151 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:59.151 [212/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.151 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:59.151 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.408 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:59.408 [216/267] Linking static target drivers/librte_bus_pci.a
00:02:59.408 [217/267] Linking static target drivers/librte_bus_vdev.a
00:02:59.408 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:59.408 [219/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.408 [220/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:59.408 [221/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:59.665 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:59.665 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.665 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:59.665 [225/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:59.665 [226/267] Linking static target drivers/librte_mempool_ring.a
00:03:00.229 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:01.183 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.183 [229/267] Linking target lib/librte_eal.so.24.1
00:03:01.183 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:01.440 [231/267] Linking target lib/librte_meter.so.24.1
00:03:01.440 [232/267] Linking target lib/librte_pci.so.24.1
00:03:01.440 [233/267] Linking target lib/librte_dmadev.so.24.1
00:03:01.440 [234/267] Linking target lib/librte_ring.so.24.1
00:03:01.440 [235/267] Linking target drivers/librte_bus_vdev.so.24.1
00:03:01.440 [236/267] Linking target lib/librte_timer.so.24.1
00:03:01.440 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:01.440 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:01.440 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:01.440 [240/267] Linking target drivers/librte_bus_pci.so.24.1
00:03:01.440 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:01.440 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:01.440 [243/267] Linking target lib/librte_mempool.so.24.1
00:03:01.440 [244/267] Linking target lib/librte_rcu.so.24.1
00:03:01.697 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:01.698 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:01.698 [247/267] Linking target drivers/librte_mempool_ring.so.24.1
00:03:01.698 [248/267] Linking target lib/librte_mbuf.so.24.1
00:03:01.698 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:01.698 [250/267] Linking target lib/librte_net.so.24.1
00:03:01.698 [251/267] Linking target lib/librte_reorder.so.24.1
00:03:01.698 [252/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.956 [253/267] Linking target lib/librte_compressdev.so.24.1
00:03:01.956 [254/267] Linking target lib/librte_cryptodev.so.24.1
00:03:01.956 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:01.956 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:01.956 [257/267] Linking target lib/librte_hash.so.24.1
00:03:01.956 [258/267] Linking target lib/librte_cmdline.so.24.1
00:03:01.956 [259/267] Linking target lib/librte_ethdev.so.24.1
00:03:01.956 [260/267] Linking target lib/librte_security.so.24.1
00:03:01.956 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:01.956 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:02.213 [263/267] Linking target lib/librte_power.so.24.1
00:03:03.143 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:03.143 [265/267] Linking static target lib/librte_vhost.a
00:03:04.514 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.514 [267/267] Linking target lib/librte_vhost.so.24.1
00:03:04.514 INFO: autodetecting backend as ninja
00:03:04.514 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:03:19.376 CC lib/ut/ut.o
00:03:19.376 CC lib/ut_mock/mock.o
00:03:19.376 CC lib/log/log.o
00:03:19.376 CC lib/log/log_flags.o
00:03:19.376 CC lib/log/log_deprecated.o
00:03:19.637 LIB libspdk_ut_mock.a
00:03:19.637 LIB libspdk_ut.a
00:03:19.637 SO libspdk_ut_mock.so.6.0
00:03:19.637 SO libspdk_ut.so.2.0
00:03:19.637 LIB libspdk_log.a
00:03:19.637 SYMLINK libspdk_ut_mock.so
00:03:19.637 SYMLINK libspdk_ut.so
00:03:19.637 SO libspdk_log.so.7.1
00:03:19.637 SYMLINK libspdk_log.so
00:03:19.896 CC lib/dma/dma.o
00:03:19.896 CXX lib/trace_parser/trace.o
00:03:19.896 CC lib/ioat/ioat.o
00:03:19.896 CC lib/util/base64.o
00:03:19.896 CC lib/util/bit_array.o
00:03:19.896 CC lib/util/crc16.o
00:03:19.896 CC lib/util/cpuset.o
00:03:19.896 CC lib/util/crc32.o
00:03:19.896 CC lib/util/crc32c.o
00:03:19.896 CC lib/vfio_user/host/vfio_user_pci.o
00:03:19.896 CC lib/util/crc32_ieee.o
00:03:20.154 CC lib/util/crc64.o
00:03:20.154 CC lib/util/dif.o
00:03:20.154 LIB libspdk_dma.a
00:03:20.154 CC lib/util/fd.o
00:03:20.154 CC lib/vfio_user/host/vfio_user.o
00:03:20.154 SO libspdk_dma.so.5.0
00:03:20.154 CC lib/util/fd_group.o
00:03:20.154 CC lib/util/file.o
00:03:20.154 SYMLINK libspdk_dma.so
00:03:20.154 CC lib/util/hexlify.o
00:03:20.154 CC lib/util/iov.o
00:03:20.154 LIB libspdk_ioat.a
00:03:20.154 CC lib/util/math.o
00:03:20.154 SO libspdk_ioat.so.7.0
00:03:20.154 CC lib/util/net.o
00:03:20.154 CC lib/util/pipe.o
00:03:20.154 CC lib/util/strerror_tls.o
00:03:20.154 SYMLINK libspdk_ioat.so
00:03:20.154 LIB libspdk_vfio_user.a
00:03:20.154 CC lib/util/string.o
00:03:20.154 SO libspdk_vfio_user.so.5.0
00:03:20.412 CC lib/util/uuid.o
00:03:20.412 CC lib/util/xor.o
00:03:20.412 CC lib/util/zipf.o
00:03:20.412 SYMLINK libspdk_vfio_user.so
00:03:20.412 CC lib/util/md5.o
00:03:20.669 LIB libspdk_util.a
00:03:20.669 SO libspdk_util.so.10.1
00:03:20.927 SYMLINK libspdk_util.so
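Note: [267/267] above completes the DPDK subproject build; the CC/LIB/SO/SYMLINK lines that follow are SPDK's own quiet make output. A rough local equivalent of the two phases, assuming a standard SPDK checkout with the bundled DPDK (the CI driver scripts choose the real configure flags, so treat the ones below as placeholders):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --with-shared   # assumed flags; configure also prepares the DPDK meson build
    make -j10                                  # builds DPDK first, then the libspdk_* libraries seen here

In this output, CC/CXX lines are object compiles, LIB lines are static archives, and the SO/SYMLINK pairs are the versioned shared libraries plus their unversioned symlinks.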
00:03:20.927 LIB libspdk_trace_parser.a
00:03:20.927 SO libspdk_trace_parser.so.6.0
00:03:20.928 CC lib/json/json_parse.o
00:03:20.928 CC lib/json/json_util.o
00:03:20.928 CC lib/json/json_write.o
00:03:20.928 CC lib/idxd/idxd.o
00:03:20.928 CC lib/idxd/idxd_user.o
00:03:20.928 CC lib/env_dpdk/env.o
00:03:20.928 CC lib/rdma_utils/rdma_utils.o
00:03:20.928 CC lib/vmd/vmd.o
00:03:20.928 CC lib/conf/conf.o
00:03:20.928 SYMLINK libspdk_trace_parser.so
00:03:20.928 CC lib/vmd/led.o
00:03:21.187 CC lib/env_dpdk/memory.o
00:03:21.187 LIB libspdk_conf.a
00:03:21.187 CC lib/env_dpdk/pci.o
00:03:21.187 SO libspdk_conf.so.6.0
00:03:21.187 LIB libspdk_rdma_utils.a
00:03:21.187 CC lib/env_dpdk/init.o
00:03:21.187 SO libspdk_rdma_utils.so.1.0
00:03:21.187 SYMLINK libspdk_conf.so
00:03:21.187 LIB libspdk_json.a
00:03:21.187 CC lib/env_dpdk/threads.o
00:03:21.187 CC lib/env_dpdk/pci_ioat.o
00:03:21.187 SO libspdk_json.so.6.0
00:03:21.187 SYMLINK libspdk_rdma_utils.so
00:03:21.187 CC lib/env_dpdk/pci_virtio.o
00:03:21.187 SYMLINK libspdk_json.so
00:03:21.187 CC lib/env_dpdk/pci_vmd.o
00:03:21.445 CC lib/env_dpdk/pci_idxd.o
00:03:21.445 CC lib/env_dpdk/pci_event.o
00:03:21.445 CC lib/env_dpdk/sigbus_handler.o
00:03:21.445 CC lib/env_dpdk/pci_dpdk.o
00:03:21.445 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:21.445 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:21.445 CC lib/idxd/idxd_kernel.o
00:03:21.703 CC lib/rdma_provider/common.o
00:03:21.703 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:21.703 LIB libspdk_idxd.a
00:03:21.703 LIB libspdk_vmd.a
00:03:21.703 SO libspdk_idxd.so.12.1
00:03:21.703 SO libspdk_vmd.so.6.0
00:03:21.703 CC lib/jsonrpc/jsonrpc_server.o
00:03:21.703 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:21.703 CC lib/jsonrpc/jsonrpc_client.o
00:03:21.703 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:21.703 SYMLINK libspdk_vmd.so
00:03:21.703 SYMLINK libspdk_idxd.so
00:03:21.961 LIB libspdk_rdma_provider.a
00:03:21.961 SO libspdk_rdma_provider.so.7.0
00:03:21.961 LIB libspdk_jsonrpc.a
00:03:21.961 SYMLINK libspdk_rdma_provider.so
00:03:21.961 SO libspdk_jsonrpc.so.6.0
00:03:21.961 SYMLINK libspdk_jsonrpc.so
00:03:22.219 CC lib/rpc/rpc.o
00:03:22.477 LIB libspdk_rpc.a
00:03:22.477 LIB libspdk_env_dpdk.a
00:03:22.477 SO libspdk_rpc.so.6.0
00:03:22.477 SYMLINK libspdk_rpc.so
00:03:22.477 SO libspdk_env_dpdk.so.15.1
00:03:22.735 SYMLINK libspdk_env_dpdk.so
00:03:22.735 CC lib/keyring/keyring_rpc.o
00:03:22.735 CC lib/keyring/keyring.o
00:03:22.735 CC lib/notify/notify.o
00:03:22.735 CC lib/notify/notify_rpc.o
00:03:22.735 CC lib/trace/trace_flags.o
00:03:22.735 CC lib/trace/trace.o
00:03:22.735 CC lib/trace/trace_rpc.o
00:03:22.735 LIB libspdk_notify.a
00:03:22.992 SO libspdk_notify.so.6.0
00:03:22.992 LIB libspdk_keyring.a
00:03:22.992 SO libspdk_keyring.so.2.0
00:03:22.992 SYMLINK libspdk_notify.so
00:03:22.992 LIB libspdk_trace.a
00:03:22.992 SO libspdk_trace.so.11.0
00:03:22.992 SYMLINK libspdk_keyring.so
00:03:22.992 SYMLINK libspdk_trace.so
00:03:23.250 CC lib/thread/thread.o
00:03:23.250 CC lib/thread/iobuf.o
00:03:23.250 CC lib/sock/sock_rpc.o
00:03:23.250 CC lib/sock/sock.o
00:03:23.508 LIB libspdk_sock.a
00:03:23.508 SO libspdk_sock.so.10.0
00:03:23.508 SYMLINK libspdk_sock.so
00:03:23.767 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:23.767 CC lib/nvme/nvme_ctrlr.o
00:03:23.767 CC lib/nvme/nvme_fabric.o
00:03:23.767 CC lib/nvme/nvme_pcie.o
00:03:23.767 CC lib/nvme/nvme_ns.o
00:03:23.767 CC lib/nvme/nvme_ns_cmd.o
00:03:23.767 CC lib/nvme/nvme_pcie_common.o
00:03:23.767 CC lib/nvme/nvme.o
00:03:23.767 CC lib/nvme/nvme_qpair.o
00:03:24.334 CC lib/nvme/nvme_quirks.o
00:03:24.334 CC lib/nvme/nvme_transport.o
00:03:24.595 LIB libspdk_thread.a
00:03:24.595 CC lib/nvme/nvme_discovery.o
00:03:24.595 SO libspdk_thread.so.11.0
00:03:24.595 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:24.595 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:24.595 SYMLINK libspdk_thread.so
00:03:24.595 CC lib/nvme/nvme_tcp.o
00:03:24.595 CC lib/nvme/nvme_opal.o
00:03:24.595 CC lib/nvme/nvme_io_msg.o
00:03:24.595 CC lib/nvme/nvme_poll_group.o
00:03:24.855 CC lib/nvme/nvme_zns.o
00:03:24.855 CC lib/nvme/nvme_stubs.o
00:03:24.855 CC lib/nvme/nvme_auth.o
00:03:24.855 CC lib/nvme/nvme_cuse.o
00:03:25.113 CC lib/nvme/nvme_rdma.o
00:03:25.113 CC lib/accel/accel.o
00:03:25.371 CC lib/init/json_config.o
00:03:25.371 CC lib/blob/blobstore.o
00:03:25.371 CC lib/virtio/virtio.o
00:03:25.371 CC lib/fsdev/fsdev.o
00:03:25.629 CC lib/init/subsystem.o
00:03:25.629 CC lib/init/subsystem_rpc.o
00:03:25.629 CC lib/blob/request.o
00:03:25.629 CC lib/virtio/virtio_vhost_user.o
00:03:25.629 CC lib/accel/accel_rpc.o
00:03:25.629 CC lib/init/rpc.o
00:03:25.887 CC lib/fsdev/fsdev_io.o
00:03:25.887 CC lib/fsdev/fsdev_rpc.o
00:03:25.887 LIB libspdk_init.a
00:03:25.887 SO libspdk_init.so.6.0
00:03:25.887 SYMLINK libspdk_init.so
00:03:25.887 CC lib/virtio/virtio_vfio_user.o
00:03:25.887 CC lib/accel/accel_sw.o
00:03:25.887 CC lib/blob/zeroes.o
00:03:25.887 CC lib/blob/blob_bs_dev.o
00:03:25.887 CC lib/virtio/virtio_pci.o
00:03:26.145 LIB libspdk_fsdev.a
00:03:26.145 SO libspdk_fsdev.so.2.0
00:03:26.145 SYMLINK libspdk_fsdev.so
00:03:26.145 LIB libspdk_virtio.a
00:03:26.145 CC lib/event/app.o
00:03:26.145 CC lib/event/reactor.o
00:03:26.145 CC lib/event/log_rpc.o
00:03:26.145 CC lib/event/scheduler_static.o
00:03:26.145 CC lib/event/app_rpc.o
00:03:26.145 SO libspdk_virtio.so.7.0
00:03:26.145 LIB libspdk_nvme.a
00:03:26.145 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:26.403 SYMLINK libspdk_virtio.so
00:03:26.403 LIB libspdk_accel.a
00:03:26.403 SO libspdk_accel.so.16.0
00:03:26.403 SO libspdk_nvme.so.15.0
00:03:26.403 SYMLINK libspdk_accel.so
00:03:26.661 SYMLINK libspdk_nvme.so
00:03:26.661 LIB libspdk_event.a
00:03:26.661 CC lib/bdev/bdev.o
00:03:26.661 CC lib/bdev/part.o
00:03:26.661 CC lib/bdev/bdev_zone.o
00:03:26.661 CC lib/bdev/bdev_rpc.o
00:03:26.661 CC lib/bdev/scsi_nvme.o
00:03:26.661 SO libspdk_event.so.14.0
00:03:26.661 SYMLINK libspdk_event.so
00:03:26.945 LIB libspdk_fuse_dispatcher.a
00:03:26.945 SO libspdk_fuse_dispatcher.so.1.0
00:03:26.945 SYMLINK libspdk_fuse_dispatcher.so
00:03:27.882 LIB libspdk_blob.a
00:03:28.138 SO libspdk_blob.so.11.0
00:03:28.138 SYMLINK libspdk_blob.so
00:03:28.395 CC lib/blobfs/blobfs.o
00:03:28.395 CC lib/blobfs/tree.o
00:03:28.395 CC lib/lvol/lvol.o
00:03:28.959 LIB libspdk_blobfs.a
00:03:29.215 SO libspdk_blobfs.so.10.0
00:03:29.215 SYMLINK libspdk_blobfs.so
00:03:29.215 LIB libspdk_lvol.a
00:03:29.215 SO libspdk_lvol.so.10.0
00:03:29.215 SYMLINK libspdk_lvol.so
00:03:29.215 LIB libspdk_bdev.a
00:03:29.473 SO libspdk_bdev.so.17.0
00:03:29.473 SYMLINK libspdk_bdev.so
00:03:29.730 CC lib/nbd/nbd.o
00:03:29.730 CC lib/scsi/lun.o
00:03:29.730 CC lib/scsi/port.o
00:03:29.730 CC lib/scsi/scsi.o
00:03:29.730 CC lib/nbd/nbd_rpc.o
00:03:29.730 CC lib/scsi/dev.o
00:03:29.730 CC lib/scsi/scsi_bdev.o
00:03:29.730 CC lib/nvmf/ctrlr.o
00:03:29.730 CC lib/ftl/ftl_core.o
00:03:29.730 CC lib/ublk/ublk.o
00:03:29.731 CC lib/scsi/scsi_pr.o
00:03:29.731 CC lib/scsi/scsi_rpc.o
00:03:29.731 CC lib/nvmf/ctrlr_discovery.o
00:03:29.731 CC lib/nvmf/ctrlr_bdev.o
00:03:29.731 CC lib/nvmf/subsystem.o
00:03:29.731 CC lib/nvmf/nvmf.o
00:03:29.988 CC lib/ftl/ftl_init.o
00:03:29.988 LIB libspdk_nbd.a
00:03:29.988 SO libspdk_nbd.so.7.0
00:03:29.988 CC lib/nvmf/nvmf_rpc.o
00:03:29.988 CC lib/scsi/task.o
00:03:29.988 SYMLINK libspdk_nbd.so
00:03:29.988 CC lib/nvmf/transport.o
00:03:29.988 CC lib/ftl/ftl_layout.o
00:03:30.245 CC lib/ftl/ftl_debug.o
00:03:30.245 LIB libspdk_scsi.a
00:03:30.245 CC lib/ublk/ublk_rpc.o
00:03:30.245 SO libspdk_scsi.so.9.0
00:03:30.245 SYMLINK libspdk_scsi.so
00:03:30.245 CC lib/ftl/ftl_io.o
00:03:30.245 LIB libspdk_ublk.a
00:03:30.245 CC lib/ftl/ftl_sb.o
00:03:30.245 SO libspdk_ublk.so.3.0
00:03:30.245 CC lib/ftl/ftl_l2p.o
00:03:30.502 SYMLINK libspdk_ublk.so
00:03:30.502 CC lib/nvmf/tcp.o
00:03:30.502 CC lib/ftl/ftl_l2p_flat.o
00:03:30.502 CC lib/ftl/ftl_nv_cache.o
00:03:30.502 CC lib/iscsi/conn.o
00:03:30.502 CC lib/ftl/ftl_band.o
00:03:30.502 CC lib/nvmf/stubs.o
00:03:30.502 CC lib/ftl/ftl_band_ops.o
00:03:30.759 CC lib/nvmf/mdns_server.o
00:03:30.759 CC lib/vhost/vhost.o
00:03:30.759 CC lib/vhost/vhost_rpc.o
00:03:30.759 CC lib/nvmf/rdma.o
00:03:30.759 CC lib/nvmf/auth.o
00:03:31.016 CC lib/iscsi/init_grp.o
00:03:31.016 CC lib/iscsi/iscsi.o
00:03:31.016 CC lib/iscsi/param.o
00:03:31.016 CC lib/vhost/vhost_scsi.o
00:03:31.016 CC lib/iscsi/portal_grp.o
00:03:31.273 CC lib/vhost/vhost_blk.o
00:03:31.273 CC lib/ftl/ftl_writer.o
00:03:31.531 CC lib/iscsi/tgt_node.o
00:03:31.531 CC lib/iscsi/iscsi_subsystem.o
00:03:31.531 CC lib/vhost/rte_vhost_user.o
00:03:31.531 CC lib/ftl/ftl_rq.o
00:03:31.531 CC lib/ftl/ftl_reloc.o
00:03:31.531 CC lib/ftl/ftl_l2p_cache.o
00:03:31.788 CC lib/ftl/ftl_p2l.o
00:03:31.788 CC lib/ftl/ftl_p2l_log.o
00:03:31.788 CC lib/ftl/mngt/ftl_mngt.o
00:03:31.788 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:31.788 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:32.045 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:32.045 CC lib/iscsi/iscsi_rpc.o
00:03:32.303 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:32.303 CC lib/ftl/utils/ftl_conf.o
00:03:32.303 CC lib/ftl/utils/ftl_md.o
00:03:32.303 CC lib/iscsi/task.o
00:03:32.303 CC lib/ftl/utils/ftl_mempool.o
00:03:32.303 CC lib/ftl/utils/ftl_bitmap.o
00:03:32.303 CC lib/ftl/utils/ftl_property.o
00:03:32.303 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:32.303 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:32.303 LIB libspdk_vhost.a
00:03:32.303 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:32.303 SO libspdk_vhost.so.8.0
00:03:32.303 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:32.561 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:32.561 LIB libspdk_iscsi.a
00:03:32.561 SYMLINK libspdk_vhost.so
00:03:32.561 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:32.561 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:32.561 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:32.561 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:32.561 SO libspdk_iscsi.so.8.0
00:03:32.561 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:32.561 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:32.561 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:32.561 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:32.561 CC lib/ftl/base/ftl_base_dev.o
00:03:32.561 SYMLINK libspdk_iscsi.so
00:03:32.561 CC lib/ftl/base/ftl_base_bdev.o
00:03:32.561 CC lib/ftl/ftl_trace.o
00:03:32.820 LIB libspdk_ftl.a
00:03:33.081 SO libspdk_ftl.so.9.0
00:03:33.081 LIB libspdk_nvmf.a
00:03:33.081 SO libspdk_nvmf.so.20.0
00:03:33.342 SYMLINK libspdk_ftl.so
00:03:33.342 SYMLINK libspdk_nvmf.so
00:03:33.604 CC module/env_dpdk/env_dpdk_rpc.o
00:03:33.604 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:33.604 CC module/accel/dsa/accel_dsa.o
00:03:33.604 CC module/keyring/file/keyring.o
00:03:33.604 CC module/fsdev/aio/fsdev_aio.o
00:03:33.604 CC module/accel/error/accel_error.o
00:03:33.604 CC module/accel/ioat/accel_ioat.o
00:03:33.604 CC module/blob/bdev/blob_bdev.o
00:03:33.604 CC module/accel/iaa/accel_iaa.o
00:03:33.604 CC module/sock/posix/posix.o
00:03:33.604 LIB libspdk_env_dpdk_rpc.a
00:03:33.899 SO libspdk_env_dpdk_rpc.so.6.0
00:03:33.899 SYMLINK libspdk_env_dpdk_rpc.so
00:03:33.899 CC module/accel/dsa/accel_dsa_rpc.o
00:03:33.899 LIB libspdk_scheduler_dynamic.a
00:03:33.899 CC module/keyring/file/keyring_rpc.o
00:03:33.899 CC module/accel/error/accel_error_rpc.o
00:03:33.899 SO libspdk_scheduler_dynamic.so.4.0
00:03:33.899 CC module/accel/iaa/accel_iaa_rpc.o
00:03:33.899 SYMLINK libspdk_scheduler_dynamic.so
00:03:33.899 LIB libspdk_blob_bdev.a
00:03:33.899 CC module/accel/ioat/accel_ioat_rpc.o
00:03:33.899 SO libspdk_blob_bdev.so.11.0
00:03:33.899 LIB libspdk_keyring_file.a
00:03:33.899 LIB libspdk_accel_error.a
00:03:33.899 SO libspdk_keyring_file.so.2.0
00:03:33.899 LIB libspdk_accel_dsa.a
00:03:33.899 LIB libspdk_accel_iaa.a
00:03:33.899 SO libspdk_accel_dsa.so.5.0
00:03:33.899 SO libspdk_accel_error.so.2.0
00:03:33.899 SO libspdk_accel_iaa.so.3.0
00:03:33.899 SYMLINK libspdk_blob_bdev.so
00:03:33.899 SYMLINK libspdk_keyring_file.so
00:03:33.899 LIB libspdk_accel_ioat.a
00:03:34.181 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:34.181 SYMLINK libspdk_accel_dsa.so
00:03:34.181 SYMLINK libspdk_accel_iaa.so
00:03:34.181 SYMLINK libspdk_accel_error.so
00:03:34.181 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:34.181 CC module/fsdev/aio/linux_aio_mgr.o
00:03:34.181 SO libspdk_accel_ioat.so.6.0
00:03:34.181 CC module/scheduler/gscheduler/gscheduler.o
00:03:34.181 SYMLINK libspdk_accel_ioat.so
00:03:34.181 CC module/keyring/linux/keyring.o
00:03:34.181 LIB libspdk_scheduler_dpdk_governor.a
00:03:34.181 CC module/keyring/linux/keyring_rpc.o
00:03:34.181 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:34.181 LIB libspdk_scheduler_gscheduler.a
00:03:34.181 SO libspdk_scheduler_gscheduler.so.4.0
00:03:34.181 CC module/bdev/delay/vbdev_delay.o
00:03:34.181 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:34.181 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:34.181 CC module/bdev/error/vbdev_error.o
00:03:34.181 LIB libspdk_fsdev_aio.a
00:03:34.181 CC module/blobfs/bdev/blobfs_bdev.o
00:03:34.181 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:34.181 SYMLINK libspdk_scheduler_gscheduler.so
00:03:34.181 LIB libspdk_keyring_linux.a
00:03:34.181 SO libspdk_fsdev_aio.so.1.0
00:03:34.181 SO libspdk_keyring_linux.so.1.0
00:03:34.181 LIB libspdk_sock_posix.a
00:03:34.181 CC module/bdev/error/vbdev_error_rpc.o
00:03:34.181 SYMLINK libspdk_fsdev_aio.so
00:03:34.181 SO libspdk_sock_posix.so.6.0
00:03:34.181 SYMLINK libspdk_keyring_linux.so
00:03:34.439 CC module/bdev/gpt/gpt.o
00:03:34.439 CC module/bdev/gpt/vbdev_gpt.o
00:03:34.439 SYMLINK libspdk_sock_posix.so
00:03:34.439 LIB libspdk_blobfs_bdev.a
00:03:34.439 SO libspdk_blobfs_bdev.so.6.0
00:03:34.439 LIB libspdk_bdev_error.a
00:03:34.439 CC module/bdev/lvol/vbdev_lvol.o
00:03:34.439 CC module/bdev/malloc/bdev_malloc.o
00:03:34.439 SO libspdk_bdev_error.so.6.0
00:03:34.439 SYMLINK libspdk_blobfs_bdev.so
00:03:34.439 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:34.439 LIB libspdk_bdev_delay.a
00:03:34.439 SYMLINK libspdk_bdev_error.so
00:03:34.439 SO libspdk_bdev_delay.so.6.0
00:03:34.439 CC module/bdev/nvme/bdev_nvme.o
00:03:34.439 CC module/bdev/null/bdev_null.o
00:03:34.439 CC module/bdev/null/bdev_null_rpc.o
00:03:34.439 CC module/bdev/passthru/vbdev_passthru.o
00:03:34.439 SYMLINK libspdk_bdev_delay.so
00:03:34.439 LIB libspdk_bdev_gpt.a
00:03:34.696 SO libspdk_bdev_gpt.so.6.0
00:03:34.696 CC module/bdev/raid/bdev_raid.o
00:03:34.696 SYMLINK libspdk_bdev_gpt.so
00:03:34.696 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:34.696 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:34.696 CC module/bdev/split/vbdev_split.o
00:03:34.696 LIB libspdk_bdev_null.a
00:03:34.696 SO libspdk_bdev_null.so.6.0
00:03:34.696 CC module/bdev/nvme/nvme_rpc.o
00:03:34.696 CC module/bdev/split/vbdev_split_rpc.o
00:03:34.696 SYMLINK libspdk_bdev_null.so
00:03:34.696 CC module/bdev/raid/bdev_raid_rpc.o
00:03:34.696 LIB libspdk_bdev_passthru.a
00:03:34.696 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:34.696 SO libspdk_bdev_passthru.so.6.0
00:03:34.957 SYMLINK libspdk_bdev_passthru.so
00:03:34.957 LIB libspdk_bdev_split.a
00:03:34.957 SO libspdk_bdev_split.so.6.0
00:03:34.957 LIB libspdk_bdev_malloc.a
00:03:34.957 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:34.957 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:34.957 LIB libspdk_bdev_lvol.a
00:03:34.957 SO libspdk_bdev_malloc.so.6.0
00:03:34.957 CC module/bdev/raid/bdev_raid_sb.o
00:03:34.957 CC module/bdev/xnvme/bdev_xnvme.o
00:03:34.957 SYMLINK libspdk_bdev_split.so
00:03:34.957 CC module/bdev/raid/raid0.o
00:03:34.957 SO libspdk_bdev_lvol.so.6.0
00:03:34.957 SYMLINK libspdk_bdev_malloc.so
00:03:34.957 CC module/bdev/raid/raid1.o
00:03:34.957 SYMLINK libspdk_bdev_lvol.so
00:03:34.957 CC module/bdev/nvme/bdev_mdns_client.o
00:03:34.957 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:03:35.219 CC module/bdev/nvme/vbdev_opal.o
00:03:35.219 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:35.219 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:35.219 CC module/bdev/raid/concat.o
00:03:35.219 LIB libspdk_bdev_xnvme.a
00:03:35.219 LIB libspdk_bdev_zone_block.a
00:03:35.219 SO libspdk_bdev_xnvme.so.3.0
00:03:35.219 SO libspdk_bdev_zone_block.so.6.0
00:03:35.219 SYMLINK libspdk_bdev_xnvme.so
00:03:35.219 SYMLINK libspdk_bdev_zone_block.so
00:03:35.480 CC module/bdev/aio/bdev_aio.o
00:03:35.480 CC module/bdev/aio/bdev_aio_rpc.o
00:03:35.480 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:35.480 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:35.480 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:35.480 CC module/bdev/ftl/bdev_ftl.o
00:03:35.480 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:35.480 CC module/bdev/iscsi/bdev_iscsi.o
00:03:35.480 LIB libspdk_bdev_raid.a
00:03:35.480 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:35.480 SO libspdk_bdev_raid.so.6.0
00:03:35.480 SYMLINK libspdk_bdev_raid.so
00:03:35.740 LIB libspdk_bdev_aio.a
00:03:35.740 LIB libspdk_bdev_ftl.a
00:03:35.740 SO libspdk_bdev_aio.so.6.0
00:03:35.740 SO libspdk_bdev_ftl.so.6.0
00:03:35.740 SYMLINK libspdk_bdev_aio.so
00:03:35.740 SYMLINK libspdk_bdev_ftl.so
00:03:35.740 LIB libspdk_bdev_iscsi.a
00:03:35.740 SO libspdk_bdev_iscsi.so.6.0
00:03:35.740 LIB libspdk_bdev_virtio.a
00:03:35.740 SYMLINK libspdk_bdev_iscsi.so
00:03:35.740 SO libspdk_bdev_virtio.so.6.0
00:03:36.000 SYMLINK libspdk_bdev_virtio.so
00:03:36.568 LIB libspdk_bdev_nvme.a
00:03:36.568 SO libspdk_bdev_nvme.so.7.1
00:03:36.826 SYMLINK libspdk_bdev_nvme.so
00:03:37.086 CC module/event/subsystems/iobuf/iobuf.o
00:03:37.086 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:37.086 CC module/event/subsystems/scheduler/scheduler.o
00:03:37.086 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:37.086 CC module/event/subsystems/sock/sock.o
00:03:37.086 CC module/event/subsystems/fsdev/fsdev.o
00:03:37.086 CC module/event/subsystems/keyring/keyring.o
00:03:37.086 CC module/event/subsystems/vmd/vmd.o
00:03:37.086 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:37.348 LIB libspdk_event_vhost_blk.a
00:03:37.348 LIB libspdk_event_scheduler.a
00:03:37.348 LIB libspdk_event_fsdev.a
00:03:37.348 LIB libspdk_event_keyring.a
00:03:37.348 SO libspdk_event_fsdev.so.1.0
00:03:37.348 SO libspdk_event_vhost_blk.so.3.0
00:03:37.348 LIB libspdk_event_sock.a
00:03:37.348 SO libspdk_event_scheduler.so.4.0
00:03:37.348 SO libspdk_event_keyring.so.1.0
00:03:37.348 LIB libspdk_event_vmd.a
00:03:37.348 SO libspdk_event_sock.so.5.0
00:03:37.348 LIB libspdk_event_iobuf.a
00:03:37.348 SO libspdk_event_vmd.so.6.0
00:03:37.348 SYMLINK libspdk_event_fsdev.so
00:03:37.348 SYMLINK libspdk_event_keyring.so
00:03:37.348 SYMLINK libspdk_event_vhost_blk.so
00:03:37.348 SYMLINK libspdk_event_scheduler.so
00:03:37.348 SO libspdk_event_iobuf.so.3.0
00:03:37.348 SYMLINK libspdk_event_sock.so
00:03:37.348 SYMLINK libspdk_event_vmd.so
00:03:37.348 SYMLINK libspdk_event_iobuf.so
00:03:37.608 CC module/event/subsystems/accel/accel.o
00:03:37.608 LIB libspdk_event_accel.a
00:03:37.608 SO libspdk_event_accel.so.6.0
00:03:37.869 SYMLINK libspdk_event_accel.so
00:03:37.869 CC module/event/subsystems/bdev/bdev.o
00:03:38.130 LIB libspdk_event_bdev.a
00:03:38.130 SO libspdk_event_bdev.so.6.0
00:03:38.130 SYMLINK libspdk_event_bdev.so
00:03:38.392 CC module/event/subsystems/nbd/nbd.o
00:03:38.392 CC module/event/subsystems/scsi/scsi.o
00:03:38.392 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:38.392 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:38.392 CC module/event/subsystems/ublk/ublk.o
00:03:38.392 LIB libspdk_event_nbd.a
00:03:38.392 LIB libspdk_event_scsi.a
00:03:38.392 LIB libspdk_event_ublk.a
00:03:38.392 SO libspdk_event_nbd.so.6.0
00:03:38.392 SO libspdk_event_scsi.so.6.0
00:03:38.392 SO libspdk_event_ublk.so.3.0
00:03:38.392 LIB libspdk_event_nvmf.a
00:03:38.654 SYMLINK libspdk_event_scsi.so
00:03:38.654 SYMLINK libspdk_event_nbd.so
00:03:38.654 SYMLINK libspdk_event_ublk.so
00:03:38.654 SO libspdk_event_nvmf.so.6.0
00:03:38.654 SYMLINK libspdk_event_nvmf.so
00:03:38.654 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:38.654 CC module/event/subsystems/iscsi/iscsi.o
00:03:38.917 LIB libspdk_event_vhost_scsi.a
00:03:38.917 LIB libspdk_event_iscsi.a
00:03:38.917 SO libspdk_event_vhost_scsi.so.3.0
00:03:38.917 SO libspdk_event_iscsi.so.6.0
00:03:38.917 SYMLINK libspdk_event_vhost_scsi.so
00:03:38.917 SYMLINK libspdk_event_iscsi.so
00:03:39.177 SO libspdk.so.6.0
00:03:39.177 SYMLINK libspdk.so
00:03:39.177 CXX app/trace/trace.o
00:03:39.177 CC app/spdk_lspci/spdk_lspci.o
00:03:39.177 CC app/spdk_nvme_identify/identify.o
00:03:39.177 CC app/trace_record/trace_record.o
00:03:39.177 CC app/spdk_nvme_perf/perf.o
00:03:39.177 CC app/nvmf_tgt/nvmf_main.o
00:03:39.177 CC app/iscsi_tgt/iscsi_tgt.o
00:03:39.177 CC app/spdk_tgt/spdk_tgt.o
00:03:39.437 CC examples/util/zipf/zipf.o
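Note: the libspdk_* libraries are now in place and the build has moved on to SPDK's applications and functional-test binaries (the CXX app/trace/trace.o line above and the test/... objects and LINK lines below). Assuming the standard layout of such a build tree, the linked binaries land under build/bin inside the repository, e.g. a hypothetical post-build check:

    ls /home/vagrant/spdk_repo/spdk/build/bin    # should list the spdk_* binaries linked below once make finishes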
00:03:39.437 CC test/thread/poller_perf/poller_perf.o
00:03:39.437 LINK spdk_lspci
00:03:39.437 LINK spdk_trace_record
00:03:39.437 LINK zipf
00:03:39.437 LINK poller_perf
00:03:39.437 LINK iscsi_tgt
00:03:39.437 LINK nvmf_tgt
00:03:39.437 LINK spdk_tgt
00:03:39.696 CC app/spdk_nvme_discover/discovery_aer.o
00:03:39.696 CC app/spdk_top/spdk_top.o
00:03:39.696 LINK spdk_trace
00:03:39.696 CC examples/ioat/perf/perf.o
00:03:39.696 CC app/spdk_dd/spdk_dd.o
00:03:39.696 LINK spdk_nvme_discover
00:03:39.696 CC test/dma/test_dma/test_dma.o
00:03:39.696 CC test/app/bdev_svc/bdev_svc.o
00:03:39.696 TEST_HEADER include/spdk/accel.h
00:03:39.696 TEST_HEADER include/spdk/accel_module.h
00:03:39.696 CC app/fio/nvme/fio_plugin.o
00:03:39.696 TEST_HEADER include/spdk/assert.h
00:03:39.696 TEST_HEADER include/spdk/barrier.h
00:03:39.696 TEST_HEADER include/spdk/base64.h
00:03:39.696 TEST_HEADER include/spdk/bdev.h
00:03:39.696 TEST_HEADER include/spdk/bdev_module.h
00:03:39.696 TEST_HEADER include/spdk/bdev_zone.h
00:03:39.696 TEST_HEADER include/spdk/bit_array.h
00:03:39.696 TEST_HEADER include/spdk/bit_pool.h
00:03:39.696 TEST_HEADER include/spdk/blob_bdev.h
00:03:39.696 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:39.696 TEST_HEADER include/spdk/blobfs.h
00:03:39.696 TEST_HEADER include/spdk/blob.h
00:03:39.956 TEST_HEADER include/spdk/conf.h
00:03:39.956 TEST_HEADER include/spdk/config.h
00:03:39.956 TEST_HEADER include/spdk/cpuset.h
00:03:39.956 TEST_HEADER include/spdk/crc16.h
00:03:39.956 TEST_HEADER include/spdk/crc32.h
00:03:39.956 TEST_HEADER include/spdk/crc64.h
00:03:39.956 TEST_HEADER include/spdk/dif.h
00:03:39.956 TEST_HEADER include/spdk/dma.h
00:03:39.956 TEST_HEADER include/spdk/endian.h
00:03:39.956 TEST_HEADER include/spdk/env_dpdk.h
00:03:39.956 TEST_HEADER include/spdk/env.h
00:03:39.956 TEST_HEADER include/spdk/event.h
00:03:39.956 TEST_HEADER include/spdk/fd_group.h
00:03:39.956 TEST_HEADER include/spdk/fd.h
00:03:39.956 TEST_HEADER include/spdk/file.h
00:03:39.956 TEST_HEADER include/spdk/fsdev.h
00:03:39.956 TEST_HEADER include/spdk/fsdev_module.h
00:03:39.956 TEST_HEADER include/spdk/ftl.h
00:03:39.956 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:39.956 TEST_HEADER include/spdk/gpt_spec.h
00:03:39.956 TEST_HEADER include/spdk/hexlify.h
00:03:39.956 TEST_HEADER include/spdk/histogram_data.h
00:03:39.956 TEST_HEADER include/spdk/idxd.h
00:03:39.956 TEST_HEADER include/spdk/idxd_spec.h
00:03:39.956 TEST_HEADER include/spdk/init.h
00:03:39.956 TEST_HEADER include/spdk/ioat.h
00:03:39.956 TEST_HEADER include/spdk/ioat_spec.h
00:03:39.956 TEST_HEADER include/spdk/iscsi_spec.h
00:03:39.956 TEST_HEADER include/spdk/json.h
00:03:39.956 TEST_HEADER include/spdk/jsonrpc.h
00:03:39.956 TEST_HEADER include/spdk/keyring.h
00:03:39.956 TEST_HEADER include/spdk/keyring_module.h
00:03:39.956 TEST_HEADER include/spdk/likely.h
00:03:39.956 TEST_HEADER include/spdk/log.h
00:03:39.956 TEST_HEADER include/spdk/lvol.h
00:03:39.956 TEST_HEADER include/spdk/md5.h
00:03:39.956 TEST_HEADER include/spdk/memory.h
00:03:39.956 TEST_HEADER include/spdk/mmio.h
00:03:39.956 TEST_HEADER include/spdk/nbd.h
00:03:39.956 TEST_HEADER include/spdk/net.h
00:03:39.956 TEST_HEADER include/spdk/notify.h
00:03:39.956 TEST_HEADER include/spdk/nvme.h
00:03:39.956 TEST_HEADER include/spdk/nvme_intel.h
00:03:39.956 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:39.956 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:39.957 TEST_HEADER include/spdk/nvme_spec.h
00:03:39.957 TEST_HEADER include/spdk/nvme_zns.h
00:03:39.957 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:39.957 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:39.957 TEST_HEADER include/spdk/nvmf.h
00:03:39.957 TEST_HEADER include/spdk/nvmf_spec.h
00:03:39.957 TEST_HEADER include/spdk/nvmf_transport.h
00:03:39.957 TEST_HEADER include/spdk/opal.h
00:03:39.957 TEST_HEADER include/spdk/opal_spec.h
00:03:39.957 LINK bdev_svc
00:03:39.957 TEST_HEADER include/spdk/pci_ids.h
00:03:39.957 TEST_HEADER include/spdk/pipe.h
00:03:39.957 TEST_HEADER include/spdk/queue.h
00:03:39.957 TEST_HEADER include/spdk/reduce.h
00:03:39.957 TEST_HEADER include/spdk/rpc.h
00:03:39.957 TEST_HEADER include/spdk/scheduler.h
00:03:39.957 TEST_HEADER include/spdk/scsi.h
00:03:39.957 TEST_HEADER include/spdk/scsi_spec.h
00:03:39.957 TEST_HEADER include/spdk/sock.h
00:03:39.957 TEST_HEADER include/spdk/stdinc.h
00:03:39.957 TEST_HEADER include/spdk/string.h
00:03:39.957 TEST_HEADER include/spdk/thread.h
00:03:39.957 LINK spdk_nvme_perf
00:03:39.957 TEST_HEADER include/spdk/trace.h
00:03:39.957 TEST_HEADER include/spdk/trace_parser.h
00:03:39.957 LINK ioat_perf
00:03:39.957 TEST_HEADER include/spdk/tree.h
00:03:39.957 TEST_HEADER include/spdk/ublk.h
00:03:39.957 TEST_HEADER include/spdk/util.h
00:03:39.957 TEST_HEADER include/spdk/uuid.h
00:03:39.957 TEST_HEADER include/spdk/version.h
00:03:39.957 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:39.957 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:39.957 TEST_HEADER include/spdk/vhost.h
00:03:39.957 TEST_HEADER include/spdk/vmd.h
00:03:39.957 TEST_HEADER include/spdk/xor.h
00:03:39.957 TEST_HEADER include/spdk/zipf.h
00:03:39.957 CXX test/cpp_headers/accel.o
00:03:39.957 CC app/vhost/vhost.o
00:03:39.957 LINK spdk_nvme_identify
00:03:40.215 LINK spdk_dd
00:03:40.215 CC examples/ioat/verify/verify.o
00:03:40.215 CXX test/cpp_headers/accel_module.o
00:03:40.215 LINK vhost
00:03:40.215 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:40.215 LINK test_dma
00:03:40.215 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:40.215 CC test/env/mem_callbacks/mem_callbacks.o
00:03:40.215 LINK verify
00:03:40.215 CXX test/cpp_headers/assert.o
00:03:40.215 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:40.473 LINK spdk_nvme
00:03:40.473 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:40.473 CXX test/cpp_headers/barrier.o
00:03:40.473 LINK spdk_top
00:03:40.473 CC examples/vmd/lsvmd/lsvmd.o
00:03:40.473 CC app/fio/bdev/fio_plugin.o
00:03:40.473 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:40.473 CXX test/cpp_headers/base64.o
00:03:40.473 CC examples/idxd/perf/perf.o
00:03:40.473 LINK nvme_fuzz
00:03:40.732 LINK lsvmd
00:03:40.732 CC examples/vmd/led/led.o
00:03:40.732 CXX test/cpp_headers/bdev.o
00:03:40.732 LINK interrupt_tgt
00:03:40.732 LINK led
00:03:40.732 LINK mem_callbacks
00:03:40.732 LINK vhost_fuzz
00:03:40.732 CXX test/cpp_headers/bdev_module.o
00:03:40.990 CC examples/sock/hello_world/hello_sock.o
00:03:40.990 CC examples/thread/thread/thread_ex.o
00:03:40.990 LINK idxd_perf
00:03:40.990 LINK spdk_bdev
00:03:40.990 CC test/app/histogram_perf/histogram_perf.o
00:03:40.990 CC test/env/vtophys/vtophys.o
00:03:40.990 CXX test/cpp_headers/bdev_zone.o
00:03:40.990 CC test/event/event_perf/event_perf.o
00:03:40.990 CC test/event/reactor/reactor.o
00:03:40.990 LINK hello_sock
00:03:40.990 LINK histogram_perf
00:03:40.990 LINK vtophys
00:03:40.990 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:40.990 LINK thread
00:03:40.990 CC test/app/jsoncat/jsoncat.o
00:03:41.248 CXX test/cpp_headers/bit_array.o
00:03:41.248 LINK event_perf
00:03:41.248 LINK reactor
00:03:41.248 CC test/env/memory/memory_ut.o
00:03:41.248 CXX test/cpp_headers/bit_pool.o
00:03:41.248 CC test/event/reactor_perf/reactor_perf.o
00:03:41.248 CC test/env/pci/pci_ut.o
00:03:41.248 LINK jsoncat
00:03:41.248 LINK env_dpdk_post_init
00:03:41.248 CC test/app/stub/stub.o
00:03:41.248 CC test/event/app_repeat/app_repeat.o
00:03:41.248 CXX test/cpp_headers/blob_bdev.o
00:03:41.248 LINK reactor_perf
00:03:41.507 CC examples/nvme/hello_world/hello_world.o
00:03:41.507 CC examples/nvme/reconnect/reconnect.o
00:03:41.507 LINK stub
00:03:41.507 LINK app_repeat
00:03:41.507 CXX test/cpp_headers/blobfs_bdev.o
00:03:41.507 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:41.507 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:41.507 LINK pci_ut
00:03:41.507 LINK hello_world
00:03:41.507 CC examples/nvme/arbitration/arbitration.o
00:03:41.765 CXX test/cpp_headers/blobfs.o
00:03:41.765 LINK reconnect
00:03:41.765 CC test/event/scheduler/scheduler.o
00:03:41.765 LINK hello_fsdev
00:03:41.765 CC examples/nvme/hotplug/hotplug.o
00:03:41.765 CXX test/cpp_headers/blob.o
00:03:41.765 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:42.025 LINK scheduler
00:03:42.025 CC examples/nvme/abort/abort.o
00:03:42.025 CXX test/cpp_headers/conf.o
00:03:42.025 LINK arbitration
00:03:42.025 LINK cmb_copy
00:03:42.025 LINK hotplug
00:03:42.025 LINK iscsi_fuzz
00:03:42.025 CC examples/accel/perf/accel_perf.o
00:03:42.025 LINK nvme_manage
00:03:42.025 CXX test/cpp_headers/config.o
00:03:42.025 LINK memory_ut
00:03:42.025 CXX test/cpp_headers/cpuset.o
00:03:42.025 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:42.284 CC test/rpc_client/rpc_client_test.o
00:03:42.284 CXX test/cpp_headers/crc16.o
00:03:42.284 LINK abort
00:03:42.284 CC test/nvme/aer/aer.o
00:03:42.284 CXX test/cpp_headers/crc32.o
00:03:42.284 LINK pmr_persistence
00:03:42.284 CC examples/blob/hello_world/hello_blob.o
00:03:42.284 CC test/nvme/reset/reset.o
00:03:42.284 CC test/nvme/sgl/sgl.o
00:03:42.284 CXX test/cpp_headers/crc64.o
00:03:42.284 CXX test/cpp_headers/dif.o
00:03:42.284 LINK rpc_client_test
00:03:42.542 LINK aer
00:03:42.542 CC examples/blob/cli/blobcli.o
00:03:42.542 CXX test/cpp_headers/dma.o
00:03:42.542 CC test/nvme/e2edp/nvme_dp.o
00:03:42.542 LINK reset
00:03:42.542 CC test/nvme/overhead/overhead.o
00:03:42.542 LINK hello_blob
00:03:42.542 CC test/nvme/err_injection/err_injection.o
00:03:42.542 LINK accel_perf
00:03:42.542 LINK sgl
00:03:42.542 CXX test/cpp_headers/endian.o
00:03:42.542 CC test/nvme/startup/startup.o
00:03:42.800 LINK err_injection
00:03:42.800 CC test/nvme/reserve/reserve.o
00:03:42.800 LINK overhead
00:03:42.800 CC test/nvme/simple_copy/simple_copy.o
00:03:42.800 LINK startup
00:03:42.800 CXX test/cpp_headers/env_dpdk.o
00:03:42.800 LINK nvme_dp
00:03:42.800 CC test/nvme/connect_stress/connect_stress.o
00:03:42.800 CC test/nvme/boot_partition/boot_partition.o
00:03:42.800 CXX test/cpp_headers/env.o
00:03:42.800 LINK reserve
00:03:42.800 CXX test/cpp_headers/event.o
00:03:43.061 CXX test/cpp_headers/fd_group.o
00:03:43.061 LINK connect_stress
00:03:43.061 CC test/nvme/compliance/nvme_compliance.o
00:03:43.061 LINK blobcli
00:03:43.061 LINK boot_partition
00:03:43.061 LINK simple_copy
00:03:43.061 CC examples/bdev/hello_world/hello_bdev.o
00:03:43.061 CXX test/cpp_headers/fd.o
00:03:43.061 CC test/nvme/fused_ordering/fused_ordering.o
00:03:43.061 CXX test/cpp_headers/file.o
00:03:43.061 CC test/nvme/fdp/fdp.o
00:03:43.061 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:43.061 CC
examples/bdev/bdevperf/bdevperf.o 00:03:43.320 LINK hello_bdev 00:03:43.320 CC test/accel/dif/dif.o 00:03:43.320 CXX test/cpp_headers/fsdev.o 00:03:43.320 LINK fused_ordering 00:03:43.320 CC test/blobfs/mkfs/mkfs.o 00:03:43.320 LINK nvme_compliance 00:03:43.320 LINK doorbell_aers 00:03:43.320 CXX test/cpp_headers/fsdev_module.o 00:03:43.320 CXX test/cpp_headers/ftl.o 00:03:43.320 LINK mkfs 00:03:43.320 CC test/lvol/esnap/esnap.o 00:03:43.320 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.578 LINK fdp 00:03:43.578 CC test/nvme/cuse/cuse.o 00:03:43.578 CXX test/cpp_headers/gpt_spec.o 00:03:43.578 CXX test/cpp_headers/hexlify.o 00:03:43.578 CXX test/cpp_headers/histogram_data.o 00:03:43.578 CXX test/cpp_headers/idxd.o 00:03:43.578 CXX test/cpp_headers/idxd_spec.o 00:03:43.578 CXX test/cpp_headers/init.o 00:03:43.578 CXX test/cpp_headers/ioat.o 00:03:43.578 CXX test/cpp_headers/ioat_spec.o 00:03:43.836 CXX test/cpp_headers/iscsi_spec.o 00:03:43.836 CXX test/cpp_headers/json.o 00:03:43.836 CXX test/cpp_headers/jsonrpc.o 00:03:43.836 CXX test/cpp_headers/keyring.o 00:03:43.836 CXX test/cpp_headers/keyring_module.o 00:03:43.836 CXX test/cpp_headers/likely.o 00:03:43.836 CXX test/cpp_headers/log.o 00:03:43.836 CXX test/cpp_headers/lvol.o 00:03:43.836 CXX test/cpp_headers/md5.o 00:03:43.836 CXX test/cpp_headers/memory.o 00:03:43.836 CXX test/cpp_headers/mmio.o 00:03:43.836 CXX test/cpp_headers/nbd.o 00:03:43.836 LINK dif 00:03:43.836 CXX test/cpp_headers/net.o 00:03:44.094 CXX test/cpp_headers/notify.o 00:03:44.094 CXX test/cpp_headers/nvme.o 00:03:44.094 LINK bdevperf 00:03:44.094 CXX test/cpp_headers/nvme_intel.o 00:03:44.094 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.094 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.094 CXX test/cpp_headers/nvme_spec.o 00:03:44.094 CXX test/cpp_headers/nvme_zns.o 00:03:44.094 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.094 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.094 CXX test/cpp_headers/nvmf.o 00:03:44.094 CXX test/cpp_headers/nvmf_spec.o 00:03:44.354 CXX test/cpp_headers/nvmf_transport.o 00:03:44.354 CXX test/cpp_headers/opal.o 00:03:44.354 CXX test/cpp_headers/opal_spec.o 00:03:44.354 CC examples/nvmf/nvmf/nvmf.o 00:03:44.354 CXX test/cpp_headers/pci_ids.o 00:03:44.354 CC test/bdev/bdevio/bdevio.o 00:03:44.354 CXX test/cpp_headers/pipe.o 00:03:44.354 CXX test/cpp_headers/queue.o 00:03:44.354 CXX test/cpp_headers/reduce.o 00:03:44.354 CXX test/cpp_headers/rpc.o 00:03:44.354 CXX test/cpp_headers/scheduler.o 00:03:44.354 CXX test/cpp_headers/scsi.o 00:03:44.354 CXX test/cpp_headers/scsi_spec.o 00:03:44.354 CXX test/cpp_headers/sock.o 00:03:44.354 CXX test/cpp_headers/stdinc.o 00:03:44.635 LINK nvmf 00:03:44.635 CXX test/cpp_headers/string.o 00:03:44.635 CXX test/cpp_headers/thread.o 00:03:44.635 CXX test/cpp_headers/trace.o 00:03:44.635 CXX test/cpp_headers/trace_parser.o 00:03:44.635 CXX test/cpp_headers/tree.o 00:03:44.635 CXX test/cpp_headers/ublk.o 00:03:44.635 CXX test/cpp_headers/util.o 00:03:44.635 CXX test/cpp_headers/uuid.o 00:03:44.635 CXX test/cpp_headers/version.o 00:03:44.635 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.635 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.635 CXX test/cpp_headers/vhost.o 00:03:44.635 CXX test/cpp_headers/vmd.o 00:03:44.635 CXX test/cpp_headers/xor.o 00:03:44.635 LINK bdevio 00:03:44.635 CXX test/cpp_headers/zipf.o 00:03:44.635 LINK cuse 00:03:48.828 LINK esnap 00:03:48.828 00:03:48.828 real 1m8.482s 00:03:48.828 user 6m7.895s 00:03:48.828 sys 1m3.448s 00:03:48.828 09:16:13 make -- common/autotest_common.sh@1130 
-- $ xtrace_disable 00:03:48.828 09:16:13 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.828 ************************************ 00:03:48.828 END TEST make 00:03:48.828 ************************************ 00:03:48.828 09:16:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.828 09:16:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.828 09:16:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.828 09:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.828 09:16:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.828 09:16:13 -- pm/common@44 -- $ pid=5061 00:03:48.828 09:16:13 -- pm/common@50 -- $ kill -TERM 5061 00:03:48.828 09:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.828 09:16:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.828 09:16:13 -- pm/common@44 -- $ pid=5062 00:03:48.828 09:16:13 -- pm/common@50 -- $ kill -TERM 5062 00:03:48.828 09:16:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:48.828 09:16:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:48.828 09:16:13 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.828 09:16:13 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.828 09:16:13 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.828 09:16:14 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.828 09:16:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.828 09:16:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.828 09:16:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.828 09:16:14 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.828 09:16:14 -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.828 09:16:14 -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.828 09:16:14 -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.828 09:16:14 -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.828 09:16:14 -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.828 09:16:14 -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.828 09:16:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.828 09:16:14 -- scripts/common.sh@344 -- # case "$op" in 00:03:48.828 09:16:14 -- scripts/common.sh@345 -- # : 1 00:03:48.828 09:16:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.828 09:16:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.829 09:16:14 -- scripts/common.sh@365 -- # decimal 1 00:03:48.829 09:16:14 -- scripts/common.sh@353 -- # local d=1 00:03:48.829 09:16:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.829 09:16:14 -- scripts/common.sh@355 -- # echo 1 00:03:48.829 09:16:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.829 09:16:14 -- scripts/common.sh@366 -- # decimal 2 00:03:48.829 09:16:14 -- scripts/common.sh@353 -- # local d=2 00:03:48.829 09:16:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.829 09:16:14 -- scripts/common.sh@355 -- # echo 2 00:03:48.829 09:16:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.829 09:16:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.829 09:16:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.829 09:16:14 -- scripts/common.sh@368 -- # return 0 00:03:48.829 09:16:14 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.829 09:16:14 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.829 --rc genhtml_branch_coverage=1 00:03:48.829 --rc genhtml_function_coverage=1 00:03:48.829 --rc genhtml_legend=1 00:03:48.829 --rc geninfo_all_blocks=1 00:03:48.829 --rc geninfo_unexecuted_blocks=1 00:03:48.829 00:03:48.829 ' 00:03:48.829 09:16:14 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.829 --rc genhtml_branch_coverage=1 00:03:48.829 --rc genhtml_function_coverage=1 00:03:48.829 --rc genhtml_legend=1 00:03:48.829 --rc geninfo_all_blocks=1 00:03:48.829 --rc geninfo_unexecuted_blocks=1 00:03:48.829 00:03:48.829 ' 00:03:48.829 09:16:14 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.829 --rc genhtml_branch_coverage=1 00:03:48.829 --rc genhtml_function_coverage=1 00:03:48.829 --rc genhtml_legend=1 00:03:48.829 --rc geninfo_all_blocks=1 00:03:48.829 --rc geninfo_unexecuted_blocks=1 00:03:48.829 00:03:48.829 ' 00:03:48.829 09:16:14 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.829 --rc genhtml_branch_coverage=1 00:03:48.829 --rc genhtml_function_coverage=1 00:03:48.829 --rc genhtml_legend=1 00:03:48.829 --rc geninfo_all_blocks=1 00:03:48.829 --rc geninfo_unexecuted_blocks=1 00:03:48.829 00:03:48.829 ' 00:03:48.829 09:16:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.829 09:16:14 -- nvmf/common.sh@7 -- # uname -s 00:03:48.829 09:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.829 09:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.829 09:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.829 09:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.829 09:16:14 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.829 09:16:14 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:48.829 09:16:14 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.829 09:16:14 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:48.829 09:16:14 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:03:48.829 09:16:14 -- nvmf/common.sh@16 -- # NVME_HOSTID=356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:03:48.829 09:16:14 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.829 09:16:14 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:48.829 09:16:14 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:03:48.829 09:16:14 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.829 09:16:14 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.829 09:16:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.829 09:16:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.829 09:16:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.829 09:16:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.829 09:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.829 09:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.829 09:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.829 09:16:14 -- paths/export.sh@5 -- # export PATH 00:03:48.829 09:16:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.829 09:16:14 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:03:48.829 09:16:14 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:48.829 09:16:14 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:48.829 09:16:14 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:48.829 09:16:14 -- nvmf/common.sh@50 -- # : 0 00:03:48.829 09:16:14 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:48.829 09:16:14 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:48.829 09:16:14 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:48.829 09:16:14 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.829 09:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.829 09:16:14 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:48.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:48.829 09:16:14 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:48.829 09:16:14 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:48.829 09:16:14 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:48.829 09:16:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.829 09:16:14 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.829 09:16:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.829 09:16:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.829 09:16:14 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.829 09:16:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.829 09:16:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.829 09:16:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.829 09:16:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.829 09:16:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.829 09:16:14 -- spdk/autotest.sh@48 -- # udevadm_pid=54259 00:03:48.829 09:16:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.829 09:16:14 -- pm/common@17 -- # local monitor 00:03:48.829 09:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.829 09:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.829 09:16:14 -- pm/common@25 -- # sleep 1 00:03:48.829 09:16:14 -- pm/common@21 -- # date +%s 00:03:48.829 09:16:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.829 09:16:14 -- pm/common@21 -- # date +%s 00:03:48.829 09:16:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732094174 00:03:48.829 09:16:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732094174 00:03:48.829 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732094174_collect-cpu-load.pm.log 00:03:48.829 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732094174_collect-vmstat.pm.log 00:03:49.766 09:16:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.766 09:16:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.766 09:16:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.766 09:16:15 -- common/autotest_common.sh@10 -- # set +x 00:03:49.766 09:16:15 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.766 09:16:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:49.766 09:16:15 -- common/autotest_common.sh@10 -- # set +x 00:03:49.766 09:16:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:49.766 09:16:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:49.766 09:16:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:49.766 09:16:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:49.766 09:16:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:49.766 09:16:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:49.766 09:16:15 -- common/autotest_common.sh@1457 -- # uname 00:03:49.766 09:16:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:49.766 09:16:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:49.766 09:16:15 -- common/autotest_common.sh@1477 -- # uname 00:03:49.766 09:16:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:49.766 09:16:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:49.766 09:16:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:49.766 lcov: LCOV version 1.15 00:03:49.766 09:16:15 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.673 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.673 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.587 09:16:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:19.587 09:16:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.587 09:16:43 -- common/autotest_common.sh@10 -- # set +x 00:04:19.587 09:16:43 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.587 09:16:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.587 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:19.587 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:19.587 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:19.587 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:19.587 09:16:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.587 09:16:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.587 09:16:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.587 09:16:44 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:19.587 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.587 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.587 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.587 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:19.587 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.587 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:19.587 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:19.587 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.587 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.588 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 
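The xtrace above is get_zoned_devs probing each NVMe namespace. A hedged reconstruction of the predicate it keeps calling — the names mirror the trace, but the body is a sketch rather than the verbatim autotest_common.sh source:

# A namespace counts as zoned when /sys/block/<dev>/queue/zoned reports
# anything other than "none"; a missing attribute means not zoned.
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(<"/sys/block/$device/queue/zoned") != none ]]
}

On this host every namespace reports "none", so zoned_devs stays empty and the (( 0 > 0 )) check traced below skips the zoned-device special casing before the per-device GPT probes.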
00:04:19.588 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:19.588 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:19.588 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.588 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.588 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:19.588 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:19.588 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:19.588 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.588 09:16:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.588 09:16:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:19.588 09:16:44 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:19.588 09:16:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:19.588 09:16:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:44 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274408 s, 38.2 MB/s 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:44 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00928356 s, 113 MB/s 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:19.588 09:16:44 -- 
scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:44 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050837 s, 206 MB/s 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:44 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612599 s, 171 MB/s 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:44 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456544 s, 230 MB/s 00:04:19.588 09:16:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.588 09:16:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.588 09:16:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:19.588 09:16:44 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:19.588 09:16:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:19.588 No valid GPT data, bailing 00:04:19.588 09:16:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:19.588 09:16:45 -- scripts/common.sh@394 -- # pt= 00:04:19.588 09:16:45 -- scripts/common.sh@395 -- # return 1 00:04:19.588 09:16:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:19.588 1+0 records in 00:04:19.588 1+0 records out 00:04:19.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454727 s, 231 MB/s 00:04:19.588 09:16:45 -- spdk/autotest.sh@105 -- # sync 00:04:19.849 09:16:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.849 09:16:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.849 09:16:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.792 09:16:46 -- spdk/autotest.sh@111 -- # uname -s 00:04:21.792 09:16:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:21.792 09:16:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:21.792 09:16:46 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.625 Hugepages 00:04:22.625 node hugesize free / total 00:04:22.625 node0 1048576kB 0 / 0 00:04:22.625 node0 2048kB 0 / 0 00:04:22.625 00:04:22.625 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.625 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:22.625 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:22.886 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:22.886 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:22.886 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:22.886 09:16:48 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.886 09:16:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.886 09:16:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.886 09:16:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.031 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.031 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.031 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.031 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.290 09:16:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:25.229 09:16:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:25.229 09:16:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:25.229 09:16:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.229 09:16:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:25.229 09:16:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.229 09:16:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.229 09:16:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.229 09:16:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.229 09:16:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.229 09:16:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:25.229 09:16:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:25.229 09:16:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.751 Waiting for block devices as requested 00:04:25.751 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.751 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.012 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.012 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.309 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:31.309 09:16:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:31.309 09:16:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:31.309 09:16:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:31.309 09:16:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:31.309 09:16:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1543 -- # continue 00:04:31.309 09:16:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:31.309 09:16:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:31.309 09:16:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:31.309 09:16:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:31.309 09:16:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:31.309 09:16:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:31.309 09:16:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:31.310 09:16:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1543 -- # continue 00:04:31.310 09:16:56 -- 
common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:31.310 09:16:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:31.310 09:16:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:31.310 09:16:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:31.310 09:16:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1543 -- # continue 00:04:31.310 09:16:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:31.310 09:16:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:31.310 09:16:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:31.310 09:16:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # nvme 
id-ctrl /dev/nvme3 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:31.310 09:16:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:31.310 09:16:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:31.310 09:16:56 -- common/autotest_common.sh@1543 -- # continue 00:04:31.310 09:16:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:31.310 09:16:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.310 09:16:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.310 09:16:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:31.310 09:16:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.310 09:16:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.310 09:16:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.135 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.135 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.135 09:16:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.135 09:16:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.135 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.394 09:16:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.394 09:16:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:32.394 09:16:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.394 09:16:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:32.394 09:16:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:32.394 09:16:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:32.394 09:16:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.394 09:16:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:32.394 09:16:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.394 09:16:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.394 09:16:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.394 09:16:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.394 09:16:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.394 09:16:57 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:32.394 09:16:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:32.394 09:16:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.394 09:16:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.394 09:16:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.394 09:16:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.394 09:16:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.394 09:16:57 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.394 09:16:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.394 09:16:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:32.394 09:16:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.394 09:16:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.394 09:16:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:32.394 09:16:57 -- common/autotest_common.sh@1572 -- # return 0 00:04:32.394 09:16:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:32.394 09:16:57 -- common/autotest_common.sh@1580 -- # return 0 00:04:32.394 09:16:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.394 09:16:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.394 09:16:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.394 09:16:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.394 09:16:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.394 09:16:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.394 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.394 09:16:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.394 09:16:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.395 09:16:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.395 09:16:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.395 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.395 ************************************ 00:04:32.395 START TEST env 00:04:32.395 ************************************ 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.395 * Looking for test storage... 00:04:32.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.395 09:16:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.395 09:16:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.395 09:16:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.395 09:16:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.395 09:16:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.395 09:16:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.395 09:16:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.395 09:16:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.395 09:16:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.395 09:16:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.395 09:16:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.395 09:16:57 env -- scripts/common.sh@344 -- # case "$op" in 00:04:32.395 09:16:57 env -- scripts/common.sh@345 -- # : 1 00:04:32.395 09:16:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.395 09:16:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.395 09:16:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.395 09:16:57 env -- scripts/common.sh@353 -- # local d=1 00:04:32.395 09:16:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.395 09:16:57 env -- scripts/common.sh@355 -- # echo 1 00:04:32.395 09:16:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.395 09:16:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.395 09:16:57 env -- scripts/common.sh@353 -- # local d=2 00:04:32.395 09:16:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.395 09:16:57 env -- scripts/common.sh@355 -- # echo 2 00:04:32.395 09:16:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.395 09:16:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.395 09:16:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.395 09:16:57 env -- scripts/common.sh@368 -- # return 0 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.395 --rc genhtml_branch_coverage=1 00:04:32.395 --rc genhtml_function_coverage=1 00:04:32.395 --rc genhtml_legend=1 00:04:32.395 --rc geninfo_all_blocks=1 00:04:32.395 --rc geninfo_unexecuted_blocks=1 00:04:32.395 00:04:32.395 ' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.395 --rc genhtml_branch_coverage=1 00:04:32.395 --rc genhtml_function_coverage=1 00:04:32.395 --rc genhtml_legend=1 00:04:32.395 --rc geninfo_all_blocks=1 00:04:32.395 --rc geninfo_unexecuted_blocks=1 00:04:32.395 00:04:32.395 ' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.395 --rc genhtml_branch_coverage=1 00:04:32.395 --rc genhtml_function_coverage=1 00:04:32.395 --rc genhtml_legend=1 00:04:32.395 --rc geninfo_all_blocks=1 00:04:32.395 --rc geninfo_unexecuted_blocks=1 00:04:32.395 00:04:32.395 ' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.395 --rc genhtml_branch_coverage=1 00:04:32.395 --rc genhtml_function_coverage=1 00:04:32.395 --rc genhtml_legend=1 00:04:32.395 --rc geninfo_all_blocks=1 00:04:32.395 --rc geninfo_unexecuted_blocks=1 00:04:32.395 00:04:32.395 ' 00:04:32.395 09:16:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.395 09:16:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.395 09:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.395 ************************************ 00:04:32.395 START TEST env_memory 00:04:32.395 ************************************ 00:04:32.395 09:16:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.653 00:04:32.653 00:04:32.653 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.653 http://cunit.sourceforge.net/ 00:04:32.653 00:04:32.653 00:04:32.653 Suite: memory 00:04:32.653 Test: alloc and free memory map ...[2024-11-20 09:16:57.896704] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.653 passed 00:04:32.653 Test: mem map translation ...[2024-11-20 09:16:57.935307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.653 [2024-11-20 09:16:57.935357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.653 [2024-11-20 09:16:57.935416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.653 [2024-11-20 09:16:57.935431] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.653 passed 00:04:32.653 Test: mem map registration ...[2024-11-20 09:16:58.003469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.653 [2024-11-20 09:16:58.003514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:32.653 passed 00:04:32.653 Test: mem map adjacent registrations ...passed 00:04:32.653 00:04:32.653 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.653 suites 1 1 n/a 0 0 00:04:32.653 tests 4 4 4 0 0 00:04:32.653 asserts 152 152 152 0 n/a 00:04:32.653 00:04:32.653 Elapsed time = 0.232 seconds 00:04:32.911 00:04:32.911 real 0m0.263s 00:04:32.911 user 0m0.231s 00:04:32.911 sys 0m0.024s 00:04:32.911 09:16:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.911 09:16:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.911 ************************************ 00:04:32.911 END TEST env_memory 00:04:32.911 ************************************ 00:04:32.911 09:16:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.911 09:16:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.911 09:16:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.911 09:16:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.911 ************************************ 00:04:32.911 START TEST env_vtophys 00:04:32.911 ************************************ 00:04:32.911 09:16:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.911 EAL: lib.eal log level changed from notice to debug 00:04:32.911 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 1 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 2 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 3 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 4 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 5 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 6 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 7 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 8 as core 0 on socket 0 00:04:32.911 EAL: Detected lcore 9 as core 0 on socket 0 00:04:32.912 EAL: Maximum logical cores by configuration: 128 00:04:32.912 EAL: Detected CPU lcores: 10 00:04:32.912 EAL: Detected NUMA nodes: 1 00:04:32.912 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.912 EAL: Detected shared linkage of DPDK 00:04:32.912 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:32.912 EAL: Selected IOVA mode 'PA' 00:04:32.912 EAL: Probing VFIO support... 00:04:32.912 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:32.912 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:32.912 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.912 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.912 EAL: Setting up physically contiguous memory... 00:04:32.912 EAL: Setting maximum number of open files to 524288 00:04:32.912 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.912 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.912 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.912 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.912 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.912 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.912 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.912 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.912 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.912 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.912 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.912 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.912 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.912 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.912 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.912 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.912 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.912 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.912 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.912 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.912 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.912 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.912 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.912 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.912 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.912 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.912 EAL: Hugepages will be freed exactly as allocated. 00:04:32.912 EAL: No shared files mode enabled, IPC is disabled 00:04:32.912 EAL: No shared files mode enabled, IPC is disabled 00:04:32.912 EAL: TSC frequency is ~2600000 KHz 00:04:32.912 EAL: Main lcore 0 is ready (tid=7fc767b16a40;cpuset=[0]) 00:04:32.912 EAL: Trying to obtain current memory policy. 00:04:32.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.912 EAL: Restoring previous memory policy: 0 00:04:32.912 EAL: request: mp_malloc_sync 00:04:32.912 EAL: No shared files mode enabled, IPC is disabled 00:04:32.912 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.912 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:32.912 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.912 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.912 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:32.912 00:04:32.912 00:04:32.912 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.912 http://cunit.sourceforge.net/ 00:04:32.912 00:04:32.912 00:04:32.912 Suite: components_suite 00:04:33.478 Test: vtophys_malloc_test ...passed 00:04:33.478 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.478 EAL: Trying to obtain current memory policy. 00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.478 EAL: Trying to obtain current memory policy. 00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.478 EAL: Trying to obtain current memory policy. 00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.478 EAL: Trying to obtain current memory policy. 00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.478 EAL: Trying to obtain current memory policy. 
00:04:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.478 EAL: Restoring previous memory policy: 4 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.478 EAL: request: mp_malloc_sync 00:04:33.478 EAL: No shared files mode enabled, IPC is disabled 00:04:33.478 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.736 EAL: Trying to obtain current memory policy. 00:04:33.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.736 EAL: Restoring previous memory policy: 4 00:04:33.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.736 EAL: request: mp_malloc_sync 00:04:33.736 EAL: No shared files mode enabled, IPC is disabled 00:04:33.736 EAL: Heap on socket 0 was expanded by 130MB 00:04:33.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.736 EAL: request: mp_malloc_sync 00:04:33.736 EAL: No shared files mode enabled, IPC is disabled 00:04:33.736 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.007 EAL: Trying to obtain current memory policy. 00:04:34.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.007 EAL: Restoring previous memory policy: 4 00:04:34.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.007 EAL: request: mp_malloc_sync 00:04:34.007 EAL: No shared files mode enabled, IPC is disabled 00:04:34.007 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.265 EAL: request: mp_malloc_sync 00:04:34.265 EAL: No shared files mode enabled, IPC is disabled 00:04:34.265 EAL: Heap on socket 0 was shrunk by 258MB 00:04:34.524 EAL: Trying to obtain current memory policy. 00:04:34.524 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.524 EAL: Restoring previous memory policy: 4 00:04:34.524 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.524 EAL: request: mp_malloc_sync 00:04:34.524 EAL: No shared files mode enabled, IPC is disabled 00:04:34.524 EAL: Heap on socket 0 was expanded by 514MB 00:04:35.093 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.093 EAL: request: mp_malloc_sync 00:04:35.093 EAL: No shared files mode enabled, IPC is disabled 00:04:35.093 EAL: Heap on socket 0 was shrunk by 514MB 00:04:35.662 EAL: Trying to obtain current memory policy. 
00:04:35.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.919 EAL: Restoring previous memory policy: 4 00:04:35.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.919 EAL: request: mp_malloc_sync 00:04:35.919 EAL: No shared files mode enabled, IPC is disabled 00:04:35.919 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.853 EAL: request: mp_malloc_sync 00:04:36.853 EAL: No shared files mode enabled, IPC is disabled 00:04:36.853 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.787 passed 00:04:37.787 00:04:37.787 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.787 suites 1 1 n/a 0 0 00:04:37.787 tests 2 2 2 0 0 00:04:37.787 asserts 5705 5705 5705 0 n/a 00:04:37.787 00:04:37.787 Elapsed time = 4.685 seconds 00:04:37.787 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.787 EAL: request: mp_malloc_sync 00:04:37.787 EAL: No shared files mode enabled, IPC is disabled 00:04:37.787 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.787 EAL: No shared files mode enabled, IPC is disabled 00:04:37.787 EAL: No shared files mode enabled, IPC is disabled 00:04:37.787 EAL: No shared files mode enabled, IPC is disabled 00:04:37.787 00:04:37.787 real 0m4.951s 00:04:37.787 user 0m4.175s 00:04:37.787 sys 0m0.628s 00:04:37.787 09:17:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.787 09:17:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.787 ************************************ 00:04:37.787 END TEST env_vtophys 00:04:37.787 ************************************ 00:04:37.787 09:17:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.787 09:17:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.787 09:17:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.787 09:17:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.787 ************************************ 00:04:37.787 START TEST env_pci 00:04:37.787 ************************************ 00:04:37.787 09:17:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.787 00:04:37.787 00:04:37.787 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.787 http://cunit.sourceforge.net/ 00:04:37.787 00:04:37.787 00:04:37.787 Suite: pci 00:04:37.787 Test: pci_hook ...[2024-11-20 09:17:03.188896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57026 has claimed it 00:04:37.787 passed 00:04:37.787 00:04:37.787 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.787 suites 1 1 n/a 0 0 00:04:37.787 tests 1 1 1 0 0 00:04:37.787 asserts 25 25 25 0 n/a 00:04:37.787 00:04:37.787 Elapsed time = 0.007 seconds 00:04:37.787 EAL: Cannot find device (10000:00:01.0) 00:04:37.787 EAL: Failed to attach device on primary process 00:04:37.787 00:04:37.787 real 0m0.057s 00:04:37.787 user 0m0.027s 00:04:37.787 sys 0m0.029s 00:04:37.787 ************************************ 00:04:37.787 09:17:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.787 09:17:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.787 END TEST env_pci 00:04:37.787 ************************************ 00:04:38.044 09:17:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.044 09:17:03 env -- env/env.sh@15 -- # uname 00:04:38.045 09:17:03 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.045 09:17:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.045 09:17:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.045 09:17:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:38.045 09:17:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.045 09:17:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.045 ************************************ 00:04:38.045 START TEST env_dpdk_post_init 00:04:38.045 ************************************ 00:04:38.045 09:17:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.045 EAL: Detected CPU lcores: 10 00:04:38.045 EAL: Detected NUMA nodes: 1 00:04:38.045 EAL: Detected shared linkage of DPDK 00:04:38.045 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.045 EAL: Selected IOVA mode 'PA' 00:04:38.045 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.045 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:38.045 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:38.045 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:38.045 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:38.302 Starting DPDK initialization... 00:04:38.302 Starting SPDK post initialization... 00:04:38.302 SPDK NVMe probe 00:04:38.302 Attaching to 0000:00:10.0 00:04:38.302 Attaching to 0000:00:11.0 00:04:38.302 Attaching to 0000:00:12.0 00:04:38.302 Attaching to 0000:00:13.0 00:04:38.302 Attached to 0000:00:10.0 00:04:38.302 Attached to 0000:00:11.0 00:04:38.302 Attached to 0000:00:13.0 00:04:38.302 Attached to 0000:00:12.0 00:04:38.302 Cleaning up... 
00:04:38.302 00:04:38.302 real 0m0.246s 00:04:38.302 user 0m0.081s 00:04:38.302 sys 0m0.068s 00:04:38.302 09:17:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.302 09:17:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.302 ************************************ 00:04:38.302 END TEST env_dpdk_post_init 00:04:38.302 ************************************ 00:04:38.302 09:17:03 env -- env/env.sh@26 -- # uname 00:04:38.302 09:17:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.302 09:17:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.302 09:17:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.302 09:17:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.302 09:17:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.302 ************************************ 00:04:38.302 START TEST env_mem_callbacks 00:04:38.302 ************************************ 00:04:38.302 09:17:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.302 EAL: Detected CPU lcores: 10 00:04:38.302 EAL: Detected NUMA nodes: 1 00:04:38.302 EAL: Detected shared linkage of DPDK 00:04:38.302 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.302 EAL: Selected IOVA mode 'PA' 00:04:38.302 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.302 00:04:38.302 00:04:38.302 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.302 http://cunit.sourceforge.net/ 00:04:38.302 00:04:38.302 00:04:38.302 Suite: memory 00:04:38.302 Test: test ... 00:04:38.302 register 0x200000200000 2097152 00:04:38.302 malloc 3145728 00:04:38.302 register 0x200000400000 4194304 00:04:38.302 buf 0x2000004fffc0 len 3145728 PASSED 00:04:38.302 malloc 64 00:04:38.302 buf 0x2000004ffec0 len 64 PASSED 00:04:38.302 malloc 4194304 00:04:38.302 register 0x200000800000 6291456 00:04:38.302 buf 0x2000009fffc0 len 4194304 PASSED 00:04:38.302 free 0x2000004fffc0 3145728 00:04:38.302 free 0x2000004ffec0 64 00:04:38.302 unregister 0x200000400000 4194304 PASSED 00:04:38.302 free 0x2000009fffc0 4194304 00:04:38.302 unregister 0x200000800000 6291456 PASSED 00:04:38.302 malloc 8388608 00:04:38.302 register 0x200000400000 10485760 00:04:38.302 buf 0x2000005fffc0 len 8388608 PASSED 00:04:38.302 free 0x2000005fffc0 8388608 00:04:38.302 unregister 0x200000400000 10485760 PASSED 00:04:38.302 passed 00:04:38.302 00:04:38.302 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.302 suites 1 1 n/a 0 0 00:04:38.302 tests 1 1 1 0 0 00:04:38.302 asserts 15 15 15 0 n/a 00:04:38.302 00:04:38.302 Elapsed time = 0.047 seconds 00:04:38.561 00:04:38.561 real 0m0.209s 00:04:38.561 user 0m0.055s 00:04:38.561 sys 0m0.052s 00:04:38.561 09:17:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.561 09:17:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.561 ************************************ 00:04:38.561 END TEST env_mem_callbacks 00:04:38.561 ************************************ 00:04:38.561 ************************************ 00:04:38.561 END TEST env 00:04:38.561 ************************************ 00:04:38.561 00:04:38.561 real 0m6.092s 00:04:38.561 user 0m4.723s 00:04:38.561 sys 0m0.995s 00:04:38.561 09:17:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.561 09:17:03 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.561 09:17:03 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.561 09:17:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.561 09:17:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.561 09:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:38.561 ************************************ 00:04:38.561 START TEST rpc 00:04:38.561 ************************************ 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.561 * Looking for test storage... 00:04:38.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.561 09:17:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.561 09:17:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.561 09:17:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.561 09:17:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.561 09:17:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.561 09:17:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.561 09:17:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.561 09:17:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.561 09:17:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.561 09:17:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.561 09:17:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.561 09:17:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.561 09:17:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.561 09:17:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.561 09:17:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.561 --rc genhtml_branch_coverage=1 00:04:38.561 --rc genhtml_function_coverage=1 00:04:38.561 --rc genhtml_legend=1 00:04:38.561 --rc geninfo_all_blocks=1 00:04:38.561 --rc geninfo_unexecuted_blocks=1 00:04:38.561 00:04:38.561 ' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.561 --rc genhtml_branch_coverage=1 00:04:38.561 --rc genhtml_function_coverage=1 00:04:38.561 --rc genhtml_legend=1 00:04:38.561 --rc geninfo_all_blocks=1 00:04:38.561 --rc geninfo_unexecuted_blocks=1 00:04:38.561 00:04:38.561 ' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.561 --rc genhtml_branch_coverage=1 00:04:38.561 --rc genhtml_function_coverage=1 00:04:38.561 --rc genhtml_legend=1 00:04:38.561 --rc geninfo_all_blocks=1 00:04:38.561 --rc geninfo_unexecuted_blocks=1 00:04:38.561 00:04:38.561 ' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.561 --rc genhtml_branch_coverage=1 00:04:38.561 --rc genhtml_function_coverage=1 00:04:38.561 --rc genhtml_legend=1 00:04:38.561 --rc geninfo_all_blocks=1 00:04:38.561 --rc geninfo_unexecuted_blocks=1 00:04:38.561 00:04:38.561 ' 00:04:38.561 09:17:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57153 00:04:38.561 09:17:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.561 09:17:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57153 00:04:38.561 09:17:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 57153 ']' 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.561 09:17:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.818 [2024-11-20 09:17:04.031545] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:04:38.818 [2024-11-20 09:17:04.031667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57153 ] 00:04:38.818 [2024-11-20 09:17:04.187232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.075 [2024-11-20 09:17:04.283141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.075 [2024-11-20 09:17:04.283191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57153' to capture a snapshot of events at runtime. 00:04:39.075 [2024-11-20 09:17:04.283201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.075 [2024-11-20 09:17:04.283211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.075 [2024-11-20 09:17:04.283218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57153 for offline analysis/debug. 00:04:39.075 [2024-11-20 09:17:04.284052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.640 09:17:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.640 09:17:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.640 09:17:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.640 09:17:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.640 09:17:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.640 09:17:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.641 09:17:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.641 09:17:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.641 09:17:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 ************************************ 00:04:39.641 START TEST rpc_integrity 00:04:39.641 ************************************ 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.641 09:17:04 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.641 { 00:04:39.641 "name": "Malloc0", 00:04:39.641 "aliases": [ 00:04:39.641 "92e998dd-95ca-4e62-93f5-8a3beb35f13f" 00:04:39.641 ], 00:04:39.641 "product_name": "Malloc disk", 00:04:39.641 "block_size": 512, 00:04:39.641 "num_blocks": 16384, 00:04:39.641 "uuid": "92e998dd-95ca-4e62-93f5-8a3beb35f13f", 00:04:39.641 "assigned_rate_limits": { 00:04:39.641 "rw_ios_per_sec": 0, 00:04:39.641 "rw_mbytes_per_sec": 0, 00:04:39.641 "r_mbytes_per_sec": 0, 00:04:39.641 "w_mbytes_per_sec": 0 00:04:39.641 }, 00:04:39.641 "claimed": false, 00:04:39.641 "zoned": false, 00:04:39.641 "supported_io_types": { 00:04:39.641 "read": true, 00:04:39.641 "write": true, 00:04:39.641 "unmap": true, 00:04:39.641 "flush": true, 00:04:39.641 "reset": true, 00:04:39.641 "nvme_admin": false, 00:04:39.641 "nvme_io": false, 00:04:39.641 "nvme_io_md": false, 00:04:39.641 "write_zeroes": true, 00:04:39.641 "zcopy": true, 00:04:39.641 "get_zone_info": false, 00:04:39.641 "zone_management": false, 00:04:39.641 "zone_append": false, 00:04:39.641 "compare": false, 00:04:39.641 "compare_and_write": false, 00:04:39.641 "abort": true, 00:04:39.641 "seek_hole": false, 00:04:39.641 "seek_data": false, 00:04:39.641 "copy": true, 00:04:39.641 "nvme_iov_md": false 00:04:39.641 }, 00:04:39.641 "memory_domains": [ 00:04:39.641 { 00:04:39.641 "dma_device_id": "system", 00:04:39.641 "dma_device_type": 1 00:04:39.641 }, 00:04:39.641 { 00:04:39.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.641 "dma_device_type": 2 00:04:39.641 } 00:04:39.641 ], 00:04:39.641 "driver_specific": {} 00:04:39.641 } 00:04:39.641 ]' 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.641 09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 [2024-11-20 09:17:04.993362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.641 [2024-11-20 09:17:04.993526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.641 [2024-11-20 09:17:04.993559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:39.641 [2024-11-20 09:17:04.993571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.641 [2024-11-20 09:17:04.995815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.641 [2024-11-20 09:17:04.995854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.641 Passthru0 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.641 
09:17:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.641 09:17:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.641 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.641 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.641 { 00:04:39.641 "name": "Malloc0", 00:04:39.642 "aliases": [ 00:04:39.642 "92e998dd-95ca-4e62-93f5-8a3beb35f13f" 00:04:39.642 ], 00:04:39.642 "product_name": "Malloc disk", 00:04:39.642 "block_size": 512, 00:04:39.642 "num_blocks": 16384, 00:04:39.642 "uuid": "92e998dd-95ca-4e62-93f5-8a3beb35f13f", 00:04:39.642 "assigned_rate_limits": { 00:04:39.642 "rw_ios_per_sec": 0, 00:04:39.642 "rw_mbytes_per_sec": 0, 00:04:39.642 "r_mbytes_per_sec": 0, 00:04:39.642 "w_mbytes_per_sec": 0 00:04:39.642 }, 00:04:39.642 "claimed": true, 00:04:39.642 "claim_type": "exclusive_write", 00:04:39.642 "zoned": false, 00:04:39.642 "supported_io_types": { 00:04:39.642 "read": true, 00:04:39.642 "write": true, 00:04:39.642 "unmap": true, 00:04:39.642 "flush": true, 00:04:39.642 "reset": true, 00:04:39.642 "nvme_admin": false, 00:04:39.642 "nvme_io": false, 00:04:39.642 "nvme_io_md": false, 00:04:39.642 "write_zeroes": true, 00:04:39.642 "zcopy": true, 00:04:39.642 "get_zone_info": false, 00:04:39.642 "zone_management": false, 00:04:39.642 "zone_append": false, 00:04:39.642 "compare": false, 00:04:39.642 "compare_and_write": false, 00:04:39.642 "abort": true, 00:04:39.642 "seek_hole": false, 00:04:39.642 "seek_data": false, 00:04:39.642 "copy": true, 00:04:39.642 "nvme_iov_md": false 00:04:39.642 }, 00:04:39.642 "memory_domains": [ 00:04:39.642 { 00:04:39.642 "dma_device_id": "system", 00:04:39.642 "dma_device_type": 1 00:04:39.642 }, 00:04:39.642 { 00:04:39.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.642 "dma_device_type": 2 00:04:39.642 } 00:04:39.642 ], 00:04:39.642 "driver_specific": {} 00:04:39.642 }, 00:04:39.642 { 00:04:39.642 "name": "Passthru0", 00:04:39.642 "aliases": [ 00:04:39.642 "554f51d4-6585-58b5-af96-771f8662b171" 00:04:39.642 ], 00:04:39.642 "product_name": "passthru", 00:04:39.642 "block_size": 512, 00:04:39.642 "num_blocks": 16384, 00:04:39.642 "uuid": "554f51d4-6585-58b5-af96-771f8662b171", 00:04:39.642 "assigned_rate_limits": { 00:04:39.642 "rw_ios_per_sec": 0, 00:04:39.642 "rw_mbytes_per_sec": 0, 00:04:39.642 "r_mbytes_per_sec": 0, 00:04:39.642 "w_mbytes_per_sec": 0 00:04:39.642 }, 00:04:39.642 "claimed": false, 00:04:39.642 "zoned": false, 00:04:39.642 "supported_io_types": { 00:04:39.642 "read": true, 00:04:39.642 "write": true, 00:04:39.642 "unmap": true, 00:04:39.642 "flush": true, 00:04:39.642 "reset": true, 00:04:39.642 "nvme_admin": false, 00:04:39.642 "nvme_io": false, 00:04:39.642 "nvme_io_md": false, 00:04:39.642 "write_zeroes": true, 00:04:39.642 "zcopy": true, 00:04:39.642 "get_zone_info": false, 00:04:39.642 "zone_management": false, 00:04:39.642 "zone_append": false, 00:04:39.642 "compare": false, 00:04:39.642 "compare_and_write": false, 00:04:39.642 "abort": true, 00:04:39.642 "seek_hole": false, 00:04:39.642 "seek_data": false, 00:04:39.642 "copy": true, 00:04:39.642 "nvme_iov_md": false 00:04:39.642 }, 00:04:39.642 "memory_domains": [ 00:04:39.642 { 00:04:39.642 "dma_device_id": "system", 00:04:39.642 "dma_device_type": 1 00:04:39.642 }, 00:04:39.642 { 00:04:39.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.642 "dma_device_type": 2 
00:04:39.642 } 00:04:39.642 ], 00:04:39.642 "driver_specific": { 00:04:39.642 "passthru": { 00:04:39.642 "name": "Passthru0", 00:04:39.642 "base_bdev_name": "Malloc0" 00:04:39.642 } 00:04:39.642 } 00:04:39.642 } 00:04:39.642 ]' 00:04:39.642 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.642 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.642 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.642 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.642 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.642 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.899 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.899 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.899 ************************************ 00:04:39.899 END TEST rpc_integrity 00:04:39.899 ************************************ 00:04:39.899 09:17:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.899 00:04:39.899 real 0m0.250s 00:04:39.899 user 0m0.127s 00:04:39.899 sys 0m0.037s 00:04:39.899 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.899 09:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.899 09:17:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.899 09:17:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.899 09:17:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.899 09:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.899 ************************************ 00:04:39.899 START TEST rpc_plugins 00:04:39.899 ************************************ 00:04:39.899 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:39.899 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.899 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.899 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.900 { 00:04:39.900 "name": "Malloc1", 00:04:39.900 "aliases": 
[ 00:04:39.900 "2f7dac51-8d3d-4b10-8d1b-8e8f210d2b84" 00:04:39.900 ], 00:04:39.900 "product_name": "Malloc disk", 00:04:39.900 "block_size": 4096, 00:04:39.900 "num_blocks": 256, 00:04:39.900 "uuid": "2f7dac51-8d3d-4b10-8d1b-8e8f210d2b84", 00:04:39.900 "assigned_rate_limits": { 00:04:39.900 "rw_ios_per_sec": 0, 00:04:39.900 "rw_mbytes_per_sec": 0, 00:04:39.900 "r_mbytes_per_sec": 0, 00:04:39.900 "w_mbytes_per_sec": 0 00:04:39.900 }, 00:04:39.900 "claimed": false, 00:04:39.900 "zoned": false, 00:04:39.900 "supported_io_types": { 00:04:39.900 "read": true, 00:04:39.900 "write": true, 00:04:39.900 "unmap": true, 00:04:39.900 "flush": true, 00:04:39.900 "reset": true, 00:04:39.900 "nvme_admin": false, 00:04:39.900 "nvme_io": false, 00:04:39.900 "nvme_io_md": false, 00:04:39.900 "write_zeroes": true, 00:04:39.900 "zcopy": true, 00:04:39.900 "get_zone_info": false, 00:04:39.900 "zone_management": false, 00:04:39.900 "zone_append": false, 00:04:39.900 "compare": false, 00:04:39.900 "compare_and_write": false, 00:04:39.900 "abort": true, 00:04:39.900 "seek_hole": false, 00:04:39.900 "seek_data": false, 00:04:39.900 "copy": true, 00:04:39.900 "nvme_iov_md": false 00:04:39.900 }, 00:04:39.900 "memory_domains": [ 00:04:39.900 { 00:04:39.900 "dma_device_id": "system", 00:04:39.900 "dma_device_type": 1 00:04:39.900 }, 00:04:39.900 { 00:04:39.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.900 "dma_device_type": 2 00:04:39.900 } 00:04:39.900 ], 00:04:39.900 "driver_specific": {} 00:04:39.900 } 00:04:39.900 ]' 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.900 ************************************ 00:04:39.900 END TEST rpc_plugins 00:04:39.900 ************************************ 00:04:39.900 09:17:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.900 00:04:39.900 real 0m0.114s 00:04:39.900 user 0m0.060s 00:04:39.900 sys 0m0.018s 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.900 09:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.900 09:17:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.900 09:17:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.900 09:17:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.900 09:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.158 ************************************ 00:04:40.158 START TEST rpc_trace_cmd_test 00:04:40.158 ************************************ 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.158 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.158 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57153", 00:04:40.158 "tpoint_group_mask": "0x8", 00:04:40.158 "iscsi_conn": { 00:04:40.158 "mask": "0x2", 00:04:40.158 "tpoint_mask": "0x0" 00:04:40.158 }, 00:04:40.158 "scsi": { 00:04:40.158 "mask": "0x4", 00:04:40.158 "tpoint_mask": "0x0" 00:04:40.158 }, 00:04:40.158 "bdev": { 00:04:40.158 "mask": "0x8", 00:04:40.158 "tpoint_mask": "0xffffffffffffffff" 00:04:40.158 }, 00:04:40.158 "nvmf_rdma": { 00:04:40.158 "mask": "0x10", 00:04:40.158 "tpoint_mask": "0x0" 00:04:40.158 }, 00:04:40.158 "nvmf_tcp": { 00:04:40.158 "mask": "0x20", 00:04:40.158 "tpoint_mask": "0x0" 00:04:40.158 }, 00:04:40.158 "ftl": { 00:04:40.158 "mask": "0x40", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "blobfs": { 00:04:40.159 "mask": "0x80", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "dsa": { 00:04:40.159 "mask": "0x200", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "thread": { 00:04:40.159 "mask": "0x400", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "nvme_pcie": { 00:04:40.159 "mask": "0x800", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "iaa": { 00:04:40.159 "mask": "0x1000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "nvme_tcp": { 00:04:40.159 "mask": "0x2000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "bdev_nvme": { 00:04:40.159 "mask": "0x4000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "sock": { 00:04:40.159 "mask": "0x8000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "blob": { 00:04:40.159 "mask": "0x10000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "bdev_raid": { 00:04:40.159 "mask": "0x20000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 }, 00:04:40.159 "scheduler": { 00:04:40.159 "mask": "0x40000", 00:04:40.159 "tpoint_mask": "0x0" 00:04:40.159 } 00:04:40.159 }' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.159 ************************************ 00:04:40.159 END TEST rpc_trace_cmd_test 00:04:40.159 ************************************ 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.159 00:04:40.159 real 0m0.172s 
00:04:40.159 user 0m0.131s 00:04:40.159 sys 0m0.032s 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.159 09:17:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.159 09:17:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.159 09:17:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.159 09:17:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.159 09:17:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.159 09:17:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.159 09:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.159 ************************************ 00:04:40.159 START TEST rpc_daemon_integrity 00:04:40.159 ************************************ 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.159 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.417 { 00:04:40.417 "name": "Malloc2", 00:04:40.417 "aliases": [ 00:04:40.417 "e355adfc-36fb-4424-b846-557fc29d411c" 00:04:40.417 ], 00:04:40.417 "product_name": "Malloc disk", 00:04:40.417 "block_size": 512, 00:04:40.417 "num_blocks": 16384, 00:04:40.417 "uuid": "e355adfc-36fb-4424-b846-557fc29d411c", 00:04:40.417 "assigned_rate_limits": { 00:04:40.417 "rw_ios_per_sec": 0, 00:04:40.417 "rw_mbytes_per_sec": 0, 00:04:40.417 "r_mbytes_per_sec": 0, 00:04:40.417 "w_mbytes_per_sec": 0 00:04:40.417 }, 00:04:40.417 "claimed": false, 00:04:40.417 "zoned": false, 00:04:40.417 "supported_io_types": { 00:04:40.417 "read": true, 00:04:40.417 "write": true, 00:04:40.417 "unmap": true, 00:04:40.417 "flush": true, 00:04:40.417 "reset": true, 00:04:40.417 "nvme_admin": false, 00:04:40.417 "nvme_io": false, 00:04:40.417 "nvme_io_md": false, 00:04:40.417 "write_zeroes": true, 00:04:40.417 "zcopy": true, 00:04:40.417 "get_zone_info": false, 00:04:40.417 "zone_management": false, 00:04:40.417 "zone_append": false, 00:04:40.417 "compare": false, 00:04:40.417 
"compare_and_write": false, 00:04:40.417 "abort": true, 00:04:40.417 "seek_hole": false, 00:04:40.417 "seek_data": false, 00:04:40.417 "copy": true, 00:04:40.417 "nvme_iov_md": false 00:04:40.417 }, 00:04:40.417 "memory_domains": [ 00:04:40.417 { 00:04:40.417 "dma_device_id": "system", 00:04:40.417 "dma_device_type": 1 00:04:40.417 }, 00:04:40.417 { 00:04:40.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.417 "dma_device_type": 2 00:04:40.417 } 00:04:40.417 ], 00:04:40.417 "driver_specific": {} 00:04:40.417 } 00:04:40.417 ]' 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.417 [2024-11-20 09:17:05.684127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.417 [2024-11-20 09:17:05.684180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.417 [2024-11-20 09:17:05.684201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:40.417 [2024-11-20 09:17:05.684213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.417 [2024-11-20 09:17:05.686383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.417 [2024-11-20 09:17:05.686417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.417 Passthru0 00:04:40.417 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.418 { 00:04:40.418 "name": "Malloc2", 00:04:40.418 "aliases": [ 00:04:40.418 "e355adfc-36fb-4424-b846-557fc29d411c" 00:04:40.418 ], 00:04:40.418 "product_name": "Malloc disk", 00:04:40.418 "block_size": 512, 00:04:40.418 "num_blocks": 16384, 00:04:40.418 "uuid": "e355adfc-36fb-4424-b846-557fc29d411c", 00:04:40.418 "assigned_rate_limits": { 00:04:40.418 "rw_ios_per_sec": 0, 00:04:40.418 "rw_mbytes_per_sec": 0, 00:04:40.418 "r_mbytes_per_sec": 0, 00:04:40.418 "w_mbytes_per_sec": 0 00:04:40.418 }, 00:04:40.418 "claimed": true, 00:04:40.418 "claim_type": "exclusive_write", 00:04:40.418 "zoned": false, 00:04:40.418 "supported_io_types": { 00:04:40.418 "read": true, 00:04:40.418 "write": true, 00:04:40.418 "unmap": true, 00:04:40.418 "flush": true, 00:04:40.418 "reset": true, 00:04:40.418 "nvme_admin": false, 00:04:40.418 "nvme_io": false, 00:04:40.418 "nvme_io_md": false, 00:04:40.418 "write_zeroes": true, 00:04:40.418 "zcopy": true, 00:04:40.418 "get_zone_info": false, 00:04:40.418 "zone_management": false, 00:04:40.418 "zone_append": false, 00:04:40.418 "compare": false, 00:04:40.418 "compare_and_write": false, 00:04:40.418 "abort": true, 00:04:40.418 "seek_hole": false, 00:04:40.418 "seek_data": false, 
00:04:40.418 "copy": true, 00:04:40.418 "nvme_iov_md": false 00:04:40.418 }, 00:04:40.418 "memory_domains": [ 00:04:40.418 { 00:04:40.418 "dma_device_id": "system", 00:04:40.418 "dma_device_type": 1 00:04:40.418 }, 00:04:40.418 { 00:04:40.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.418 "dma_device_type": 2 00:04:40.418 } 00:04:40.418 ], 00:04:40.418 "driver_specific": {} 00:04:40.418 }, 00:04:40.418 { 00:04:40.418 "name": "Passthru0", 00:04:40.418 "aliases": [ 00:04:40.418 "d860dd16-33d8-5bc9-aed2-d96ab126a15f" 00:04:40.418 ], 00:04:40.418 "product_name": "passthru", 00:04:40.418 "block_size": 512, 00:04:40.418 "num_blocks": 16384, 00:04:40.418 "uuid": "d860dd16-33d8-5bc9-aed2-d96ab126a15f", 00:04:40.418 "assigned_rate_limits": { 00:04:40.418 "rw_ios_per_sec": 0, 00:04:40.418 "rw_mbytes_per_sec": 0, 00:04:40.418 "r_mbytes_per_sec": 0, 00:04:40.418 "w_mbytes_per_sec": 0 00:04:40.418 }, 00:04:40.418 "claimed": false, 00:04:40.418 "zoned": false, 00:04:40.418 "supported_io_types": { 00:04:40.418 "read": true, 00:04:40.418 "write": true, 00:04:40.418 "unmap": true, 00:04:40.418 "flush": true, 00:04:40.418 "reset": true, 00:04:40.418 "nvme_admin": false, 00:04:40.418 "nvme_io": false, 00:04:40.418 "nvme_io_md": false, 00:04:40.418 "write_zeroes": true, 00:04:40.418 "zcopy": true, 00:04:40.418 "get_zone_info": false, 00:04:40.418 "zone_management": false, 00:04:40.418 "zone_append": false, 00:04:40.418 "compare": false, 00:04:40.418 "compare_and_write": false, 00:04:40.418 "abort": true, 00:04:40.418 "seek_hole": false, 00:04:40.418 "seek_data": false, 00:04:40.418 "copy": true, 00:04:40.418 "nvme_iov_md": false 00:04:40.418 }, 00:04:40.418 "memory_domains": [ 00:04:40.418 { 00:04:40.418 "dma_device_id": "system", 00:04:40.418 "dma_device_type": 1 00:04:40.418 }, 00:04:40.418 { 00:04:40.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.418 "dma_device_type": 2 00:04:40.418 } 00:04:40.418 ], 00:04:40.418 "driver_specific": { 00:04:40.418 "passthru": { 00:04:40.418 "name": "Passthru0", 00:04:40.418 "base_bdev_name": "Malloc2" 00:04:40.418 } 00:04:40.418 } 00:04:40.418 } 00:04:40.418 ]' 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.418 ************************************ 00:04:40.418 END TEST rpc_daemon_integrity 00:04:40.418 ************************************ 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.418 00:04:40.418 real 0m0.233s 00:04:40.418 user 0m0.122s 00:04:40.418 sys 0m0.032s 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.418 09:17:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.418 09:17:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.418 09:17:05 rpc -- rpc/rpc.sh@84 -- # killprocess 57153 00:04:40.418 09:17:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 57153 ']' 00:04:40.418 09:17:05 rpc -- common/autotest_common.sh@958 -- # kill -0 57153 00:04:40.418 09:17:05 rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.418 09:17:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.418 09:17:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57153 00:04:40.676 killing process with pid 57153 00:04:40.676 09:17:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.676 09:17:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.676 09:17:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57153' 00:04:40.676 09:17:05 rpc -- common/autotest_common.sh@973 -- # kill 57153 00:04:40.676 09:17:05 rpc -- common/autotest_common.sh@978 -- # wait 57153 00:04:42.049 00:04:42.049 real 0m3.543s 00:04:42.049 user 0m3.943s 00:04:42.049 sys 0m0.615s 00:04:42.049 ************************************ 00:04:42.049 END TEST rpc 00:04:42.049 ************************************ 00:04:42.049 09:17:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.049 09:17:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.049 09:17:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:42.049 09:17:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.049 09:17:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.049 09:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:42.049 ************************************ 00:04:42.049 START TEST skip_rpc 00:04:42.049 ************************************ 00:04:42.049 09:17:07 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:42.049 * Looking for test storage... 
00:04:42.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.049 09:17:07 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.049 09:17:07 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.049 09:17:07 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.306 09:17:07 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.306 09:17:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.306 09:17:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.307 09:17:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.307 --rc genhtml_branch_coverage=1 00:04:42.307 --rc genhtml_function_coverage=1 00:04:42.307 --rc genhtml_legend=1 00:04:42.307 --rc geninfo_all_blocks=1 00:04:42.307 --rc geninfo_unexecuted_blocks=1 00:04:42.307 00:04:42.307 ' 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.307 --rc genhtml_branch_coverage=1 00:04:42.307 --rc genhtml_function_coverage=1 00:04:42.307 --rc genhtml_legend=1 00:04:42.307 --rc geninfo_all_blocks=1 00:04:42.307 --rc geninfo_unexecuted_blocks=1 00:04:42.307 00:04:42.307 ' 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.307 --rc genhtml_branch_coverage=1 00:04:42.307 --rc genhtml_function_coverage=1 00:04:42.307 --rc genhtml_legend=1 00:04:42.307 --rc geninfo_all_blocks=1 00:04:42.307 --rc geninfo_unexecuted_blocks=1 00:04:42.307 00:04:42.307 ' 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.307 --rc genhtml_branch_coverage=1 00:04:42.307 --rc genhtml_function_coverage=1 00:04:42.307 --rc genhtml_legend=1 00:04:42.307 --rc geninfo_all_blocks=1 00:04:42.307 --rc geninfo_unexecuted_blocks=1 00:04:42.307 00:04:42.307 ' 00:04:42.307 09:17:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.307 09:17:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.307 09:17:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.307 09:17:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.307 ************************************ 00:04:42.307 START TEST skip_rpc 00:04:42.307 ************************************ 00:04:42.307 09:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:42.307 09:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57366 00:04:42.307 09:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.307 09:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.307 09:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.307 [2024-11-20 09:17:07.657826] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
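[editor's note] The target above was started with --no-rpc-server, so after the 5-second settle the whole point of this suite is the assertion that follows: an RPC against the default socket must fail. The same check with the NOT/valid_exec_arg machinery unrolled (paths as in this run; using rpc.py directly as the client is an assumption — the trace goes through the rpc_cmd wrapper):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                         # let the reactor come up
    if scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded" >&2       # nothing should be listening
        exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"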
00:04:42.307 [2024-11-20 09:17:07.657943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:04:42.563 [2024-11-20 09:17:07.820029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.563 [2024-11-20 09:17:07.917289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57366 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57366 ']' 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57366 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57366 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.840 killing process with pid 57366 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57366' 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57366 00:04:47.840 09:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57366 00:04:48.406 00:04:48.406 real 0m6.191s 00:04:48.406 user 0m5.820s 00:04:48.406 sys 0m0.268s 00:04:48.406 09:17:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.406 09:17:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.406 ************************************ 00:04:48.406 END TEST skip_rpc 00:04:48.406 
************************************ 00:04:48.406 09:17:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.406 09:17:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.406 09:17:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.406 09:17:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.406 ************************************ 00:04:48.406 START TEST skip_rpc_with_json 00:04:48.406 ************************************ 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57463 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57463 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57463 ']' 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.406 09:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.663 [2024-11-20 09:17:13.887433] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
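[editor's note] This test runs the target with the RPC server enabled: it first provokes a clean JSON-RPC error (nvmf_get_transports before any transport exists — the -19 response shown below), then creates the TCP transport and snapshots the full configuration with save_config; a second target later replays that snapshot with --json and no RPC server at all. The round-trip, condensed (paths from this run; the rpc.py spellings of the traced rpc_cmd calls are assumptions):

    build/bin/spdk_tgt -m 0x1 &                          # serves /var/tmp/spdk.sock
    # ... waitforlisten, then:
    scripts/rpc.py nvmf_get_transports --trtype tcp      # -19: transport 'tcp' does not exist yet
    scripts/rpc.py nvmf_create_transport -t tcp          # "*** TCP Transport Init ***"
    scripts/rpc.py save_config > test/rpc/config.json
    # second run, no RPC server, state rebuilt purely from the JSON:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json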
00:04:48.663 [2024-11-20 09:17:13.887548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57463 ] 00:04:48.663 [2024-11-20 09:17:14.043386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.921 [2024-11-20 09:17:14.122327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 [2024-11-20 09:17:14.719627] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.487 request: 00:04:49.487 { 00:04:49.487 "trtype": "tcp", 00:04:49.487 "method": "nvmf_get_transports", 00:04:49.487 "req_id": 1 00:04:49.487 } 00:04:49.487 Got JSON-RPC error response 00:04:49.487 response: 00:04:49.487 { 00:04:49.487 "code": -19, 00:04:49.487 "message": "No such device" 00:04:49.487 } 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 [2024-11-20 09:17:14.731716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.487 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.487 { 00:04:49.487 "subsystems": [ 00:04:49.487 { 00:04:49.487 "subsystem": "fsdev", 00:04:49.487 "config": [ 00:04:49.487 { 00:04:49.487 "method": "fsdev_set_opts", 00:04:49.487 "params": { 00:04:49.487 "fsdev_io_pool_size": 65535, 00:04:49.487 "fsdev_io_cache_size": 256 00:04:49.487 } 00:04:49.487 } 00:04:49.487 ] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "keyring", 00:04:49.487 "config": [] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "iobuf", 00:04:49.487 "config": [ 00:04:49.487 { 00:04:49.487 "method": "iobuf_set_options", 00:04:49.487 "params": { 00:04:49.487 "small_pool_count": 8192, 00:04:49.487 "large_pool_count": 1024, 00:04:49.487 "small_bufsize": 8192, 00:04:49.487 "large_bufsize": 135168, 00:04:49.487 "enable_numa": false 00:04:49.487 } 00:04:49.487 } 00:04:49.487 ] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "sock", 00:04:49.487 "config": [ 00:04:49.487 { 
00:04:49.487 "method": "sock_set_default_impl", 00:04:49.487 "params": { 00:04:49.487 "impl_name": "posix" 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "sock_impl_set_options", 00:04:49.487 "params": { 00:04:49.487 "impl_name": "ssl", 00:04:49.487 "recv_buf_size": 4096, 00:04:49.487 "send_buf_size": 4096, 00:04:49.487 "enable_recv_pipe": true, 00:04:49.487 "enable_quickack": false, 00:04:49.487 "enable_placement_id": 0, 00:04:49.487 "enable_zerocopy_send_server": true, 00:04:49.487 "enable_zerocopy_send_client": false, 00:04:49.487 "zerocopy_threshold": 0, 00:04:49.487 "tls_version": 0, 00:04:49.487 "enable_ktls": false 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "sock_impl_set_options", 00:04:49.487 "params": { 00:04:49.487 "impl_name": "posix", 00:04:49.487 "recv_buf_size": 2097152, 00:04:49.487 "send_buf_size": 2097152, 00:04:49.487 "enable_recv_pipe": true, 00:04:49.487 "enable_quickack": false, 00:04:49.487 "enable_placement_id": 0, 00:04:49.487 "enable_zerocopy_send_server": true, 00:04:49.487 "enable_zerocopy_send_client": false, 00:04:49.487 "zerocopy_threshold": 0, 00:04:49.487 "tls_version": 0, 00:04:49.487 "enable_ktls": false 00:04:49.487 } 00:04:49.487 } 00:04:49.487 ] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "vmd", 00:04:49.487 "config": [] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "accel", 00:04:49.487 "config": [ 00:04:49.487 { 00:04:49.487 "method": "accel_set_options", 00:04:49.487 "params": { 00:04:49.487 "small_cache_size": 128, 00:04:49.487 "large_cache_size": 16, 00:04:49.487 "task_count": 2048, 00:04:49.487 "sequence_count": 2048, 00:04:49.487 "buf_count": 2048 00:04:49.487 } 00:04:49.487 } 00:04:49.487 ] 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "subsystem": "bdev", 00:04:49.487 "config": [ 00:04:49.487 { 00:04:49.487 "method": "bdev_set_options", 00:04:49.487 "params": { 00:04:49.487 "bdev_io_pool_size": 65535, 00:04:49.487 "bdev_io_cache_size": 256, 00:04:49.487 "bdev_auto_examine": true, 00:04:49.487 "iobuf_small_cache_size": 128, 00:04:49.487 "iobuf_large_cache_size": 16 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "bdev_raid_set_options", 00:04:49.487 "params": { 00:04:49.487 "process_window_size_kb": 1024, 00:04:49.487 "process_max_bandwidth_mb_sec": 0 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "bdev_iscsi_set_options", 00:04:49.487 "params": { 00:04:49.487 "timeout_sec": 30 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "bdev_nvme_set_options", 00:04:49.487 "params": { 00:04:49.487 "action_on_timeout": "none", 00:04:49.487 "timeout_us": 0, 00:04:49.487 "timeout_admin_us": 0, 00:04:49.487 "keep_alive_timeout_ms": 10000, 00:04:49.487 "arbitration_burst": 0, 00:04:49.487 "low_priority_weight": 0, 00:04:49.487 "medium_priority_weight": 0, 00:04:49.487 "high_priority_weight": 0, 00:04:49.487 "nvme_adminq_poll_period_us": 10000, 00:04:49.487 "nvme_ioq_poll_period_us": 0, 00:04:49.487 "io_queue_requests": 0, 00:04:49.487 "delay_cmd_submit": true, 00:04:49.487 "transport_retry_count": 4, 00:04:49.487 "bdev_retry_count": 3, 00:04:49.487 "transport_ack_timeout": 0, 00:04:49.487 "ctrlr_loss_timeout_sec": 0, 00:04:49.487 "reconnect_delay_sec": 0, 00:04:49.487 "fast_io_fail_timeout_sec": 0, 00:04:49.487 "disable_auto_failback": false, 00:04:49.487 "generate_uuids": false, 00:04:49.487 "transport_tos": 0, 00:04:49.487 "nvme_error_stat": false, 00:04:49.487 "rdma_srq_size": 0, 00:04:49.487 "io_path_stat": false, 
00:04:49.487 "allow_accel_sequence": false, 00:04:49.487 "rdma_max_cq_size": 0, 00:04:49.487 "rdma_cm_event_timeout_ms": 0, 00:04:49.487 "dhchap_digests": [ 00:04:49.487 "sha256", 00:04:49.487 "sha384", 00:04:49.487 "sha512" 00:04:49.487 ], 00:04:49.487 "dhchap_dhgroups": [ 00:04:49.487 "null", 00:04:49.487 "ffdhe2048", 00:04:49.487 "ffdhe3072", 00:04:49.487 "ffdhe4096", 00:04:49.487 "ffdhe6144", 00:04:49.487 "ffdhe8192" 00:04:49.487 ] 00:04:49.487 } 00:04:49.487 }, 00:04:49.487 { 00:04:49.487 "method": "bdev_nvme_set_hotplug", 00:04:49.487 "params": { 00:04:49.487 "period_us": 100000, 00:04:49.487 "enable": false 00:04:49.488 } 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "method": "bdev_wait_for_examine" 00:04:49.488 } 00:04:49.488 ] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "scsi", 00:04:49.488 "config": null 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "scheduler", 00:04:49.488 "config": [ 00:04:49.488 { 00:04:49.488 "method": "framework_set_scheduler", 00:04:49.488 "params": { 00:04:49.488 "name": "static" 00:04:49.488 } 00:04:49.488 } 00:04:49.488 ] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "vhost_scsi", 00:04:49.488 "config": [] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "vhost_blk", 00:04:49.488 "config": [] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "ublk", 00:04:49.488 "config": [] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "nbd", 00:04:49.488 "config": [] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "nvmf", 00:04:49.488 "config": [ 00:04:49.488 { 00:04:49.488 "method": "nvmf_set_config", 00:04:49.488 "params": { 00:04:49.488 "discovery_filter": "match_any", 00:04:49.488 "admin_cmd_passthru": { 00:04:49.488 "identify_ctrlr": false 00:04:49.488 }, 00:04:49.488 "dhchap_digests": [ 00:04:49.488 "sha256", 00:04:49.488 "sha384", 00:04:49.488 "sha512" 00:04:49.488 ], 00:04:49.488 "dhchap_dhgroups": [ 00:04:49.488 "null", 00:04:49.488 "ffdhe2048", 00:04:49.488 "ffdhe3072", 00:04:49.488 "ffdhe4096", 00:04:49.488 "ffdhe6144", 00:04:49.488 "ffdhe8192" 00:04:49.488 ] 00:04:49.488 } 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "method": "nvmf_set_max_subsystems", 00:04:49.488 "params": { 00:04:49.488 "max_subsystems": 1024 00:04:49.488 } 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "method": "nvmf_set_crdt", 00:04:49.488 "params": { 00:04:49.488 "crdt1": 0, 00:04:49.488 "crdt2": 0, 00:04:49.488 "crdt3": 0 00:04:49.488 } 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "method": "nvmf_create_transport", 00:04:49.488 "params": { 00:04:49.488 "trtype": "TCP", 00:04:49.488 "max_queue_depth": 128, 00:04:49.488 "max_io_qpairs_per_ctrlr": 127, 00:04:49.488 "in_capsule_data_size": 4096, 00:04:49.488 "max_io_size": 131072, 00:04:49.488 "io_unit_size": 131072, 00:04:49.488 "max_aq_depth": 128, 00:04:49.488 "num_shared_buffers": 511, 00:04:49.488 "buf_cache_size": 4294967295, 00:04:49.488 "dif_insert_or_strip": false, 00:04:49.488 "zcopy": false, 00:04:49.488 "c2h_success": true, 00:04:49.488 "sock_priority": 0, 00:04:49.488 "abort_timeout_sec": 1, 00:04:49.488 "ack_timeout": 0, 00:04:49.488 "data_wr_pool_size": 0 00:04:49.488 } 00:04:49.488 } 00:04:49.488 ] 00:04:49.488 }, 00:04:49.488 { 00:04:49.488 "subsystem": "iscsi", 00:04:49.488 "config": [ 00:04:49.488 { 00:04:49.488 "method": "iscsi_set_options", 00:04:49.488 "params": { 00:04:49.488 "node_base": "iqn.2016-06.io.spdk", 00:04:49.488 "max_sessions": 128, 00:04:49.488 "max_connections_per_session": 2, 00:04:49.488 "max_queue_depth": 64, 00:04:49.488 
"default_time2wait": 2, 00:04:49.488 "default_time2retain": 20, 00:04:49.488 "first_burst_length": 8192, 00:04:49.488 "immediate_data": true, 00:04:49.488 "allow_duplicated_isid": false, 00:04:49.488 "error_recovery_level": 0, 00:04:49.488 "nop_timeout": 60, 00:04:49.488 "nop_in_interval": 30, 00:04:49.488 "disable_chap": false, 00:04:49.488 "require_chap": false, 00:04:49.488 "mutual_chap": false, 00:04:49.488 "chap_group": 0, 00:04:49.488 "max_large_datain_per_connection": 64, 00:04:49.488 "max_r2t_per_connection": 4, 00:04:49.488 "pdu_pool_size": 36864, 00:04:49.488 "immediate_data_pool_size": 16384, 00:04:49.488 "data_out_pool_size": 2048 00:04:49.488 } 00:04:49.488 } 00:04:49.488 ] 00:04:49.488 } 00:04:49.488 ] 00:04:49.488 } 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57463 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57463 ']' 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57463 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57463 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.488 killing process with pid 57463 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57463' 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57463 00:04:49.488 09:17:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57463 00:04:50.862 09:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57498 00:04:50.862 09:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.862 09:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57498 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57498 ']' 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57498 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57498 00:04:56.135 killing process with pid 57498 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57498' 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57498 00:04:56.135 09:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57498 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.070 00:04:57.070 real 0m8.451s 00:04:57.070 user 0m8.118s 00:04:57.070 sys 0m0.555s 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.070 ************************************ 00:04:57.070 END TEST skip_rpc_with_json 00:04:57.070 ************************************ 00:04:57.070 09:17:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.070 ************************************ 00:04:57.070 START TEST skip_rpc_with_delay 00:04:57.070 ************************************ 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.070 [2024-11-20 09:17:22.408721] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
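[editor's note] --wait-for-rpc defers subsystem initialization until a framework_start_init RPC arrives, so combined with --no-rpc-server the target could never make progress; it must refuse the flag combination at startup, which is exactly the app.c error above, and the real 0m0.163s just below confirms it fails immediately rather than hanging. The NOT wrapper unrolled to its essence:

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        exit 1          # success here would mean the invalid combination was accepted
    fi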
00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.070 00:04:57.070 real 0m0.163s 00:04:57.070 user 0m0.093s 00:04:57.070 sys 0m0.069s 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.070 09:17:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.070 ************************************ 00:04:57.070 END TEST skip_rpc_with_delay 00:04:57.070 ************************************ 00:04:57.070 09:17:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.070 09:17:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.070 09:17:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.070 09:17:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.070 ************************************ 00:04:57.070 START TEST exit_on_failed_rpc_init 00:04:57.070 ************************************ 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57615 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57615 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57615 ']' 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.070 09:17:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.328 [2024-11-20 09:17:22.580453] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
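[editor's note] The lines that follow show the collision this test is after: a second spdk_tgt (core mask 0x2, pid 57633 in the EAL prefix) tries to bind the default RPC socket still held by pid 57615, rpc_listen fails, and the app stops with a non-zero status that the NOT wrapper turns into a pass. Reduced to its essentials — the default socket path is implied here; spdk_tgt's -r option could point a second instance elsewhere:

    build/bin/spdk_tgt -m 0x1 &     # first target owns /var/tmp/spdk.sock
    # ... waitforlisten, then:
    build/bin/spdk_tgt -m 0x2       # "RPC Unix domain socket path /var/tmp/spdk.sock in use."
                                    # -> spdk_app_stop'd on non-zero, i.e. the expected failure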
00:04:57.328 [2024-11-20 09:17:22.580574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57615 ] 00:04:57.328 [2024-11-20 09:17:22.741046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.585 [2024-11-20 09:17:22.837999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.152 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.152 [2024-11-20 09:17:23.515826] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:04:58.152 [2024-11-20 09:17:23.515941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57633 ] 00:04:58.410 [2024-11-20 09:17:23.673226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.410 [2024-11-20 09:17:23.769082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.410 [2024-11-20 09:17:23.769167] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:58.410 [2024-11-20 09:17:23.769180] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.410 [2024-11-20 09:17:23.769192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57615 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57615 ']' 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57615 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57615 00:04:58.688 killing process with pid 57615 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57615' 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57615 00:04:58.688 09:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57615 00:05:00.060 00:05:00.060 real 0m2.815s 00:05:00.060 user 0m3.133s 00:05:00.060 sys 0m0.412s 00:05:00.060 09:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.060 09:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.060 ************************************ 00:05:00.060 END TEST exit_on_failed_rpc_init 00:05:00.060 ************************************ 00:05:00.060 09:17:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.061 ************************************ 00:05:00.061 END TEST skip_rpc 00:05:00.061 ************************************ 00:05:00.061 00:05:00.061 real 0m17.934s 00:05:00.061 user 0m17.295s 00:05:00.061 sys 0m1.481s 00:05:00.061 09:17:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.061 09:17:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.061 09:17:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.061 09:17:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.061 09:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.061 09:17:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.061 
************************************ 00:05:00.061 START TEST rpc_client 00:05:00.061 ************************************ 00:05:00.061 09:17:25 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.061 * Looking for test storage... 00:05:00.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:00.061 09:17:25 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.061 09:17:25 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.061 09:17:25 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.061 09:17:25 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.061 09:17:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.061 09:17:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.061 09:17:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.061 09:17:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.061 09:17:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.319 09:17:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.319 --rc genhtml_branch_coverage=1 00:05:00.319 --rc genhtml_function_coverage=1 00:05:00.319 --rc genhtml_legend=1 00:05:00.319 --rc geninfo_all_blocks=1 00:05:00.319 --rc geninfo_unexecuted_blocks=1 00:05:00.319 00:05:00.319 ' 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.319 --rc genhtml_branch_coverage=1 00:05:00.319 --rc genhtml_function_coverage=1 00:05:00.319 --rc genhtml_legend=1 00:05:00.319 --rc geninfo_all_blocks=1 00:05:00.319 --rc geninfo_unexecuted_blocks=1 00:05:00.319 00:05:00.319 ' 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.319 --rc genhtml_branch_coverage=1 00:05:00.319 --rc genhtml_function_coverage=1 00:05:00.319 --rc genhtml_legend=1 00:05:00.319 --rc geninfo_all_blocks=1 00:05:00.319 --rc geninfo_unexecuted_blocks=1 00:05:00.319 00:05:00.319 ' 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.319 --rc genhtml_branch_coverage=1 00:05:00.319 --rc genhtml_function_coverage=1 00:05:00.319 --rc genhtml_legend=1 00:05:00.319 --rc geninfo_all_blocks=1 00:05:00.319 --rc geninfo_unexecuted_blocks=1 00:05:00.319 00:05:00.319 ' 00:05:00.319 09:17:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:00.319 OK 00:05:00.319 09:17:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.319 ************************************ 00:05:00.319 END TEST rpc_client 00:05:00.319 ************************************ 00:05:00.319 00:05:00.319 real 0m0.189s 00:05:00.319 user 0m0.105s 00:05:00.319 sys 0m0.087s 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.319 09:17:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 09:17:25 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.320 09:17:25 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.320 09:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.320 09:17:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 ************************************ 00:05:00.320 START TEST json_config 00:05:00.320 ************************************ 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.320 09:17:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.320 09:17:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.320 09:17:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.320 09:17:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.320 09:17:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.320 09:17:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.320 09:17:25 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.320 09:17:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.320 09:17:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.320 09:17:25 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.320 09:17:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.320 09:17:25 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.320 09:17:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.320 09:17:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.320 09:17:25 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.320 --rc genhtml_branch_coverage=1 00:05:00.320 --rc genhtml_function_coverage=1 00:05:00.320 --rc genhtml_legend=1 00:05:00.320 --rc geninfo_all_blocks=1 00:05:00.320 --rc geninfo_unexecuted_blocks=1 00:05:00.320 00:05:00.320 ' 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.320 --rc genhtml_branch_coverage=1 00:05:00.320 --rc genhtml_function_coverage=1 00:05:00.320 --rc genhtml_legend=1 00:05:00.320 --rc geninfo_all_blocks=1 00:05:00.320 --rc geninfo_unexecuted_blocks=1 00:05:00.320 00:05:00.320 ' 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.320 --rc genhtml_branch_coverage=1 00:05:00.320 --rc genhtml_function_coverage=1 00:05:00.320 --rc genhtml_legend=1 00:05:00.320 --rc geninfo_all_blocks=1 00:05:00.320 --rc geninfo_unexecuted_blocks=1 00:05:00.320 00:05:00.320 ' 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.320 --rc genhtml_branch_coverage=1 00:05:00.320 --rc genhtml_function_coverage=1 00:05:00.320 --rc genhtml_legend=1 00:05:00.320 --rc geninfo_all_blocks=1 00:05:00.320 --rc geninfo_unexecuted_blocks=1 00:05:00.320 00:05:00.320 ' 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.320 
09:17:25 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.320 09:17:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.320 09:17:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.320 09:17:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.320 09:17:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.320 09:17:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.320 09:17:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.320 09:17:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.320 09:17:25 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.320 09:17:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:05:00.320 09:17:25 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:00.320 09:17:25 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:00.320 09:17:25 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@50 -- # : 0 00:05:00.320 
09:17:25 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:00.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:00.320 09:17:25 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:00.320 WARNING: No tests are enabled so not running JSON configuration tests 00:05:00.320 09:17:25 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:00.320 ************************************ 00:05:00.320 END TEST json_config 00:05:00.320 ************************************ 00:05:00.320 00:05:00.320 real 0m0.128s 00:05:00.320 user 0m0.084s 00:05:00.320 sys 0m0.048s 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.320 09:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 09:17:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.320 09:17:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.320 09:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.320 09:17:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 ************************************ 00:05:00.320 START TEST json_config_extra_key 00:05:00.320 ************************************ 00:05:00.321 09:17:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.579 09:17:25 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.579 09:17:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.579 09:17:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.579 09:17:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.580 
09:17:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:00.580 09:17:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.580 09:17:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 09:17:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 09:17:25 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 09:17:25 json_config_extra_key -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=356852dd-0bfa-4a3f-a9a5-1dc974ab9a08 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.580 09:17:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.580 09:17:25 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.580 09:17:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.580 09:17:25 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.580 09:17:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:00.580 09:17:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:00.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:00.580 09:17:25 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A 
configs_path 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.580 INFO: launching applications... 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:00.580 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.580 09:17:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.581 09:17:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.581 09:17:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57827 00:05:00.581 Waiting for target to run... 00:05:00.581 09:17:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.581 09:17:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57827 /var/tmp/spdk_tgt.sock 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57827 ']' 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.581 09:17:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.581 09:17:25 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:00.581 [2024-11-20 09:17:26.004697] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:00.581 [2024-11-20 09:17:26.004817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:05:01.146 [2024-11-20 09:17:26.315501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.146 [2024-11-20 09:17:26.406285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.711 09:17:26 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.711 09:17:26 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:01.711 00:05:01.711 09:17:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:01.711 INFO: shutting down applications... 
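A note on the two "[: : integer expression expected" complaints above: nvmf/common.sh line 31 evaluates '[' '' -eq 1 ']', and bash's [ builtin demands integer operands for -eq, so an empty or unset variable makes it print that diagnostic and return false. The suite keeps going because false is the branch it wanted anyway. A two-line repro with a defensive variant (the variable name and the :-0 default are illustrative, not the upstream fix):

  [ '' -eq 1 ] && echo hit                 # prints "[: : integer expression expected", returns false
  [ "${NVMF_FLAG:-0}" -eq 1 ] && echo hit  # defaulting the empty value keeps the test quiet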
00:05:01.711 09:17:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:01.711 09:17:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:01.711 09:17:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57827 ]] 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57827 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:01.712 09:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.970 09:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.970 09:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.970 09:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:01.970 09:17:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.536 09:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.536 09:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.536 09:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:02.536 09:17:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.105 09:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.105 09:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.105 09:17:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:03.105 09:17:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57827 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.674 SPDK target shutdown done 00:05:03.674 09:17:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.674 Success 00:05:03.674 09:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.674 ************************************ 00:05:03.674 END TEST json_config_extra_key 00:05:03.674 ************************************ 00:05:03.674 00:05:03.674 real 0m3.149s 00:05:03.674 user 0m2.725s 00:05:03.674 sys 0m0.381s 00:05:03.674 09:17:28 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.674 09:17:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.674 09:17:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.674 09:17:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.674 09:17:28 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.674 09:17:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.674 ************************************ 00:05:03.674 START TEST alias_rpc 00:05:03.674 ************************************ 00:05:03.674 09:17:28 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.674 * Looking for test storage... 00:05:03.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.674 09:17:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.674 --rc genhtml_branch_coverage=1 00:05:03.674 --rc genhtml_function_coverage=1 00:05:03.674 --rc genhtml_legend=1 00:05:03.674 --rc geninfo_all_blocks=1 00:05:03.674 --rc geninfo_unexecuted_blocks=1 00:05:03.674 00:05:03.674 ' 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.674 --rc genhtml_branch_coverage=1 00:05:03.674 --rc genhtml_function_coverage=1 00:05:03.674 --rc genhtml_legend=1 00:05:03.674 --rc geninfo_all_blocks=1 00:05:03.674 --rc geninfo_unexecuted_blocks=1 00:05:03.674 00:05:03.674 ' 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.674 --rc genhtml_branch_coverage=1 00:05:03.674 --rc genhtml_function_coverage=1 00:05:03.674 --rc genhtml_legend=1 00:05:03.674 --rc geninfo_all_blocks=1 00:05:03.674 --rc geninfo_unexecuted_blocks=1 00:05:03.674 00:05:03.674 ' 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.674 --rc genhtml_branch_coverage=1 00:05:03.674 --rc genhtml_function_coverage=1 00:05:03.674 --rc genhtml_legend=1 00:05:03.674 --rc geninfo_all_blocks=1 00:05:03.674 --rc geninfo_unexecuted_blocks=1 00:05:03.674 00:05:03.674 ' 00:05:03.674 09:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.674 09:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57920 00:05:03.674 09:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57920 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57920 ']' 00:05:03.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
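The block above, replayed at the top of every test in this run, is scripts/common.sh checking whether the installed lcov predates 1.15 before exporting the --rc lcov_* coverage options. A standalone paraphrase of that comparison, splitting on ".", "-" and ":" exactly as the trace does (the name version_lt is ours; upstream calls its helpers lt and cmp_versions):

  version_lt() {                       # version_lt 1.15 2  ->  exit 0 (true)
    local IFS=.-: v
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
      (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing field decides
      (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                           # equal versions are not less-than
  }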
00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.674 09:17:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.674 09:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.935 [2024-11-20 09:17:29.176040] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:03.935 [2024-11-20 09:17:29.176161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:05:03.935 [2024-11-20 09:17:29.335786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.193 [2024-11-20 09:17:29.431735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.759 09:17:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.759 09:17:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.759 09:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:05.029 09:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57920 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57920 ']' 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57920 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57920 00:05:05.029 killing process with pid 57920 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57920' 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 57920 00:05:05.029 09:17:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 57920 00:05:06.442 ************************************ 00:05:06.442 END TEST alias_rpc 00:05:06.442 ************************************ 00:05:06.442 00:05:06.442 real 0m2.800s 00:05:06.442 user 0m2.894s 00:05:06.442 sys 0m0.404s 00:05:06.442 09:17:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.442 09:17:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.442 09:17:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:06.442 09:17:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:06.442 09:17:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.442 09:17:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.442 09:17:31 -- common/autotest_common.sh@10 -- # set +x 00:05:06.442 ************************************ 00:05:06.442 START TEST spdkcli_tcp 00:05:06.442 ************************************ 00:05:06.442 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:06.442 * Looking for test storage... 
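killprocess, traced in the alias_rpc teardown above, guards before it signals: it checks that the pid is still alive and that the process looks like an SPDK reactor rather than a sudo wrapper, then sends SIGTERM and reaps. A trimmed sketch of that shape (the checks mirror the trace; retries and the non-Linux branch are omitted, and the function name is ours):

  kill_target() {                                   # kill_target <pid>
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1          # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                  # refuse to kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # wait works: spdk_tgt is our child
  }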
00:05:06.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:06.442 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.702 09:17:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.702 --rc genhtml_branch_coverage=1 00:05:06.702 --rc genhtml_function_coverage=1 00:05:06.702 --rc genhtml_legend=1 00:05:06.702 --rc geninfo_all_blocks=1 00:05:06.702 --rc geninfo_unexecuted_blocks=1 00:05:06.702 00:05:06.702 ' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.702 --rc genhtml_branch_coverage=1 00:05:06.702 --rc genhtml_function_coverage=1 00:05:06.702 --rc genhtml_legend=1 00:05:06.702 --rc geninfo_all_blocks=1 00:05:06.702 --rc geninfo_unexecuted_blocks=1 00:05:06.702 
00:05:06.702 ' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.702 --rc genhtml_branch_coverage=1 00:05:06.702 --rc genhtml_function_coverage=1 00:05:06.702 --rc genhtml_legend=1 00:05:06.702 --rc geninfo_all_blocks=1 00:05:06.702 --rc geninfo_unexecuted_blocks=1 00:05:06.702 00:05:06.702 ' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.702 --rc genhtml_branch_coverage=1 00:05:06.702 --rc genhtml_function_coverage=1 00:05:06.702 --rc genhtml_legend=1 00:05:06.702 --rc geninfo_all_blocks=1 00:05:06.702 --rc geninfo_unexecuted_blocks=1 00:05:06.702 00:05:06.702 ' 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58016 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58016 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58016 ']' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.702 09:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.702 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:06.702 [2024-11-20 09:17:32.049536] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
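Unlike the single-core targets earlier in the run, spdkcli_tcp starts spdk_tgt with -m 0x3 -p 0 (two reactors, main core 0) and then, per tcp.sh@30 just below, fronts the UNIX RPC socket with a TCP listener so rpc.py can dial 127.0.0.1:9998. The moving parts, condensed from the trace with the backgrounding made explicit (flags as logged; -r is the client's connection-retry count and -t its timeout in seconds):

  build/bin/spdk_tgt -m 0x3 -p 0 &                          # reactors come up on cores 0 and 1
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front for the RPC socket
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods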
00:05:06.702 [2024-11-20 09:17:32.049650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ] 00:05:06.961 [2024-11-20 09:17:32.209863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.961 [2024-11-20 09:17:32.308329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.961 [2024-11-20 09:17:32.308331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.525 09:17:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.525 09:17:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:07.525 09:17:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:07.525 09:17:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58033 00:05:07.525 09:17:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:07.784 [ 00:05:07.784 "bdev_malloc_delete", 00:05:07.784 "bdev_malloc_create", 00:05:07.784 "bdev_null_resize", 00:05:07.784 "bdev_null_delete", 00:05:07.784 "bdev_null_create", 00:05:07.784 "bdev_nvme_cuse_unregister", 00:05:07.784 "bdev_nvme_cuse_register", 00:05:07.784 "bdev_opal_new_user", 00:05:07.784 "bdev_opal_set_lock_state", 00:05:07.784 "bdev_opal_delete", 00:05:07.784 "bdev_opal_get_info", 00:05:07.784 "bdev_opal_create", 00:05:07.784 "bdev_nvme_opal_revert", 00:05:07.784 "bdev_nvme_opal_init", 00:05:07.784 "bdev_nvme_send_cmd", 00:05:07.784 "bdev_nvme_set_keys", 00:05:07.784 "bdev_nvme_get_path_iostat", 00:05:07.784 "bdev_nvme_get_mdns_discovery_info", 00:05:07.784 "bdev_nvme_stop_mdns_discovery", 00:05:07.784 "bdev_nvme_start_mdns_discovery", 00:05:07.784 "bdev_nvme_set_multipath_policy", 00:05:07.784 "bdev_nvme_set_preferred_path", 00:05:07.784 "bdev_nvme_get_io_paths", 00:05:07.784 "bdev_nvme_remove_error_injection", 00:05:07.784 "bdev_nvme_add_error_injection", 00:05:07.784 "bdev_nvme_get_discovery_info", 00:05:07.784 "bdev_nvme_stop_discovery", 00:05:07.784 "bdev_nvme_start_discovery", 00:05:07.784 "bdev_nvme_get_controller_health_info", 00:05:07.784 "bdev_nvme_disable_controller", 00:05:07.784 "bdev_nvme_enable_controller", 00:05:07.784 "bdev_nvme_reset_controller", 00:05:07.784 "bdev_nvme_get_transport_statistics", 00:05:07.784 "bdev_nvme_apply_firmware", 00:05:07.784 "bdev_nvme_detach_controller", 00:05:07.784 "bdev_nvme_get_controllers", 00:05:07.784 "bdev_nvme_attach_controller", 00:05:07.784 "bdev_nvme_set_hotplug", 00:05:07.784 "bdev_nvme_set_options", 00:05:07.784 "bdev_passthru_delete", 00:05:07.784 "bdev_passthru_create", 00:05:07.784 "bdev_lvol_set_parent_bdev", 00:05:07.785 "bdev_lvol_set_parent", 00:05:07.785 "bdev_lvol_check_shallow_copy", 00:05:07.785 "bdev_lvol_start_shallow_copy", 00:05:07.785 "bdev_lvol_grow_lvstore", 00:05:07.785 "bdev_lvol_get_lvols", 00:05:07.785 "bdev_lvol_get_lvstores", 00:05:07.785 "bdev_lvol_delete", 00:05:07.785 "bdev_lvol_set_read_only", 00:05:07.785 "bdev_lvol_resize", 00:05:07.785 "bdev_lvol_decouple_parent", 00:05:07.785 "bdev_lvol_inflate", 00:05:07.785 "bdev_lvol_rename", 00:05:07.785 "bdev_lvol_clone_bdev", 00:05:07.785 "bdev_lvol_clone", 00:05:07.785 "bdev_lvol_snapshot", 00:05:07.785 "bdev_lvol_create", 00:05:07.785 "bdev_lvol_delete_lvstore", 00:05:07.785 "bdev_lvol_rename_lvstore", 00:05:07.785 
"bdev_lvol_create_lvstore", 00:05:07.785 "bdev_raid_set_options", 00:05:07.785 "bdev_raid_remove_base_bdev", 00:05:07.785 "bdev_raid_add_base_bdev", 00:05:07.785 "bdev_raid_delete", 00:05:07.785 "bdev_raid_create", 00:05:07.785 "bdev_raid_get_bdevs", 00:05:07.785 "bdev_error_inject_error", 00:05:07.785 "bdev_error_delete", 00:05:07.785 "bdev_error_create", 00:05:07.785 "bdev_split_delete", 00:05:07.785 "bdev_split_create", 00:05:07.785 "bdev_delay_delete", 00:05:07.785 "bdev_delay_create", 00:05:07.785 "bdev_delay_update_latency", 00:05:07.785 "bdev_zone_block_delete", 00:05:07.785 "bdev_zone_block_create", 00:05:07.785 "blobfs_create", 00:05:07.785 "blobfs_detect", 00:05:07.785 "blobfs_set_cache_size", 00:05:07.785 "bdev_xnvme_delete", 00:05:07.785 "bdev_xnvme_create", 00:05:07.785 "bdev_aio_delete", 00:05:07.785 "bdev_aio_rescan", 00:05:07.785 "bdev_aio_create", 00:05:07.785 "bdev_ftl_set_property", 00:05:07.785 "bdev_ftl_get_properties", 00:05:07.785 "bdev_ftl_get_stats", 00:05:07.785 "bdev_ftl_unmap", 00:05:07.785 "bdev_ftl_unload", 00:05:07.785 "bdev_ftl_delete", 00:05:07.785 "bdev_ftl_load", 00:05:07.785 "bdev_ftl_create", 00:05:07.785 "bdev_virtio_attach_controller", 00:05:07.785 "bdev_virtio_scsi_get_devices", 00:05:07.785 "bdev_virtio_detach_controller", 00:05:07.785 "bdev_virtio_blk_set_hotplug", 00:05:07.785 "bdev_iscsi_delete", 00:05:07.785 "bdev_iscsi_create", 00:05:07.785 "bdev_iscsi_set_options", 00:05:07.785 "accel_error_inject_error", 00:05:07.785 "ioat_scan_accel_module", 00:05:07.785 "dsa_scan_accel_module", 00:05:07.785 "iaa_scan_accel_module", 00:05:07.785 "keyring_file_remove_key", 00:05:07.785 "keyring_file_add_key", 00:05:07.785 "keyring_linux_set_options", 00:05:07.785 "fsdev_aio_delete", 00:05:07.785 "fsdev_aio_create", 00:05:07.785 "iscsi_get_histogram", 00:05:07.785 "iscsi_enable_histogram", 00:05:07.785 "iscsi_set_options", 00:05:07.785 "iscsi_get_auth_groups", 00:05:07.785 "iscsi_auth_group_remove_secret", 00:05:07.785 "iscsi_auth_group_add_secret", 00:05:07.785 "iscsi_delete_auth_group", 00:05:07.785 "iscsi_create_auth_group", 00:05:07.785 "iscsi_set_discovery_auth", 00:05:07.785 "iscsi_get_options", 00:05:07.785 "iscsi_target_node_request_logout", 00:05:07.785 "iscsi_target_node_set_redirect", 00:05:07.785 "iscsi_target_node_set_auth", 00:05:07.785 "iscsi_target_node_add_lun", 00:05:07.785 "iscsi_get_stats", 00:05:07.785 "iscsi_get_connections", 00:05:07.785 "iscsi_portal_group_set_auth", 00:05:07.785 "iscsi_start_portal_group", 00:05:07.785 "iscsi_delete_portal_group", 00:05:07.785 "iscsi_create_portal_group", 00:05:07.785 "iscsi_get_portal_groups", 00:05:07.785 "iscsi_delete_target_node", 00:05:07.785 "iscsi_target_node_remove_pg_ig_maps", 00:05:07.785 "iscsi_target_node_add_pg_ig_maps", 00:05:07.785 "iscsi_create_target_node", 00:05:07.785 "iscsi_get_target_nodes", 00:05:07.785 "iscsi_delete_initiator_group", 00:05:07.785 "iscsi_initiator_group_remove_initiators", 00:05:07.785 "iscsi_initiator_group_add_initiators", 00:05:07.785 "iscsi_create_initiator_group", 00:05:07.785 "iscsi_get_initiator_groups", 00:05:07.785 "nvmf_set_crdt", 00:05:07.785 "nvmf_set_config", 00:05:07.785 "nvmf_set_max_subsystems", 00:05:07.785 "nvmf_stop_mdns_prr", 00:05:07.785 "nvmf_publish_mdns_prr", 00:05:07.785 "nvmf_subsystem_get_listeners", 00:05:07.785 "nvmf_subsystem_get_qpairs", 00:05:07.785 "nvmf_subsystem_get_controllers", 00:05:07.785 "nvmf_get_stats", 00:05:07.785 "nvmf_get_transports", 00:05:07.785 "nvmf_create_transport", 00:05:07.785 "nvmf_get_targets", 00:05:07.785 
"nvmf_delete_target", 00:05:07.785 "nvmf_create_target", 00:05:07.785 "nvmf_subsystem_allow_any_host", 00:05:07.785 "nvmf_subsystem_set_keys", 00:05:07.785 "nvmf_subsystem_remove_host", 00:05:07.785 "nvmf_subsystem_add_host", 00:05:07.785 "nvmf_ns_remove_host", 00:05:07.785 "nvmf_ns_add_host", 00:05:07.785 "nvmf_subsystem_remove_ns", 00:05:07.785 "nvmf_subsystem_set_ns_ana_group", 00:05:07.785 "nvmf_subsystem_add_ns", 00:05:07.785 "nvmf_subsystem_listener_set_ana_state", 00:05:07.785 "nvmf_discovery_get_referrals", 00:05:07.785 "nvmf_discovery_remove_referral", 00:05:07.785 "nvmf_discovery_add_referral", 00:05:07.785 "nvmf_subsystem_remove_listener", 00:05:07.785 "nvmf_subsystem_add_listener", 00:05:07.785 "nvmf_delete_subsystem", 00:05:07.785 "nvmf_create_subsystem", 00:05:07.785 "nvmf_get_subsystems", 00:05:07.785 "env_dpdk_get_mem_stats", 00:05:07.785 "nbd_get_disks", 00:05:07.785 "nbd_stop_disk", 00:05:07.785 "nbd_start_disk", 00:05:07.785 "ublk_recover_disk", 00:05:07.785 "ublk_get_disks", 00:05:07.785 "ublk_stop_disk", 00:05:07.785 "ublk_start_disk", 00:05:07.785 "ublk_destroy_target", 00:05:07.785 "ublk_create_target", 00:05:07.785 "virtio_blk_create_transport", 00:05:07.785 "virtio_blk_get_transports", 00:05:07.785 "vhost_controller_set_coalescing", 00:05:07.785 "vhost_get_controllers", 00:05:07.785 "vhost_delete_controller", 00:05:07.785 "vhost_create_blk_controller", 00:05:07.785 "vhost_scsi_controller_remove_target", 00:05:07.785 "vhost_scsi_controller_add_target", 00:05:07.785 "vhost_start_scsi_controller", 00:05:07.785 "vhost_create_scsi_controller", 00:05:07.785 "thread_set_cpumask", 00:05:07.785 "scheduler_set_options", 00:05:07.785 "framework_get_governor", 00:05:07.785 "framework_get_scheduler", 00:05:07.785 "framework_set_scheduler", 00:05:07.785 "framework_get_reactors", 00:05:07.785 "thread_get_io_channels", 00:05:07.785 "thread_get_pollers", 00:05:07.785 "thread_get_stats", 00:05:07.785 "framework_monitor_context_switch", 00:05:07.785 "spdk_kill_instance", 00:05:07.785 "log_enable_timestamps", 00:05:07.785 "log_get_flags", 00:05:07.785 "log_clear_flag", 00:05:07.785 "log_set_flag", 00:05:07.785 "log_get_level", 00:05:07.785 "log_set_level", 00:05:07.785 "log_get_print_level", 00:05:07.785 "log_set_print_level", 00:05:07.785 "framework_enable_cpumask_locks", 00:05:07.785 "framework_disable_cpumask_locks", 00:05:07.785 "framework_wait_init", 00:05:07.785 "framework_start_init", 00:05:07.785 "scsi_get_devices", 00:05:07.785 "bdev_get_histogram", 00:05:07.785 "bdev_enable_histogram", 00:05:07.785 "bdev_set_qos_limit", 00:05:07.785 "bdev_set_qd_sampling_period", 00:05:07.785 "bdev_get_bdevs", 00:05:07.785 "bdev_reset_iostat", 00:05:07.785 "bdev_get_iostat", 00:05:07.785 "bdev_examine", 00:05:07.785 "bdev_wait_for_examine", 00:05:07.785 "bdev_set_options", 00:05:07.785 "accel_get_stats", 00:05:07.785 "accel_set_options", 00:05:07.785 "accel_set_driver", 00:05:07.785 "accel_crypto_key_destroy", 00:05:07.785 "accel_crypto_keys_get", 00:05:07.785 "accel_crypto_key_create", 00:05:07.785 "accel_assign_opc", 00:05:07.785 "accel_get_module_info", 00:05:07.785 "accel_get_opc_assignments", 00:05:07.785 "vmd_rescan", 00:05:07.785 "vmd_remove_device", 00:05:07.786 "vmd_enable", 00:05:07.786 "sock_get_default_impl", 00:05:07.786 "sock_set_default_impl", 00:05:07.786 "sock_impl_set_options", 00:05:07.786 "sock_impl_get_options", 00:05:07.786 "iobuf_get_stats", 00:05:07.786 "iobuf_set_options", 00:05:07.786 "keyring_get_keys", 00:05:07.786 "framework_get_pci_devices", 00:05:07.786 
"framework_get_config", 00:05:07.786 "framework_get_subsystems", 00:05:07.786 "fsdev_set_opts", 00:05:07.786 "fsdev_get_opts", 00:05:07.786 "trace_get_info", 00:05:07.786 "trace_get_tpoint_group_mask", 00:05:07.786 "trace_disable_tpoint_group", 00:05:07.786 "trace_enable_tpoint_group", 00:05:07.786 "trace_clear_tpoint_mask", 00:05:07.786 "trace_set_tpoint_mask", 00:05:07.786 "notify_get_notifications", 00:05:07.786 "notify_get_types", 00:05:07.786 "spdk_get_version", 00:05:07.786 "rpc_get_methods" 00:05:07.786 ] 00:05:07.786 09:17:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.786 09:17:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:07.786 09:17:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58016 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58016 ']' 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58016 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58016 00:05:07.786 killing process with pid 58016 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58016' 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58016 00:05:07.786 09:17:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58016 00:05:09.685 ************************************ 00:05:09.685 END TEST spdkcli_tcp 00:05:09.685 ************************************ 00:05:09.685 00:05:09.685 real 0m2.838s 00:05:09.685 user 0m5.095s 00:05:09.685 sys 0m0.413s 00:05:09.685 09:17:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.685 09:17:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 09:17:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.685 09:17:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.685 09:17:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.685 09:17:34 -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 ************************************ 00:05:09.685 START TEST dpdk_mem_utility 00:05:09.685 ************************************ 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.685 * Looking for test storage... 
00:05:09.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.685 09:17:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.685 --rc genhtml_branch_coverage=1 00:05:09.685 --rc genhtml_function_coverage=1 00:05:09.685 --rc genhtml_legend=1 00:05:09.685 --rc geninfo_all_blocks=1 00:05:09.685 --rc geninfo_unexecuted_blocks=1 00:05:09.685 00:05:09.685 ' 00:05:09.685 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.685 --rc 
genhtml_branch_coverage=1 00:05:09.685 --rc genhtml_function_coverage=1 00:05:09.685 --rc genhtml_legend=1 00:05:09.685 --rc geninfo_all_blocks=1 00:05:09.686 --rc geninfo_unexecuted_blocks=1 00:05:09.686 00:05:09.686 ' 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.686 --rc genhtml_branch_coverage=1 00:05:09.686 --rc genhtml_function_coverage=1 00:05:09.686 --rc genhtml_legend=1 00:05:09.686 --rc geninfo_all_blocks=1 00:05:09.686 --rc geninfo_unexecuted_blocks=1 00:05:09.686 00:05:09.686 ' 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.686 --rc genhtml_branch_coverage=1 00:05:09.686 --rc genhtml_function_coverage=1 00:05:09.686 --rc genhtml_legend=1 00:05:09.686 --rc geninfo_all_blocks=1 00:05:09.686 --rc geninfo_unexecuted_blocks=1 00:05:09.686 00:05:09.686 ' 00:05:09.686 09:17:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.686 09:17:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58125 00:05:09.686 09:17:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58125 00:05:09.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58125 ']' 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.686 09:17:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 09:17:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.686 [2024-11-20 09:17:34.926425] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
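dpdk_mem_utility, traced below, is a two-step flow: an RPC asks the running target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt, then scripts/dpdk_mem_info.py renders the dump. Condensed from the trace (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

  scripts/rpc.py env_dpdk_get_mem_stats   # target replies {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                # heap, mempool and memzone totals
  scripts/dpdk_mem_info.py -m 0           # element-level view of heap 0, as printed below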
00:05:09.686 [2024-11-20 09:17:34.926549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58125 ] 00:05:09.686 [2024-11-20 09:17:35.088458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.943 [2024-11-20 09:17:35.189855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.508 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.508 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:10.508 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:10.508 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:10.508 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.508 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.508 { 00:05:10.508 "filename": "/tmp/spdk_mem_dump.txt" 00:05:10.508 } 00:05:10.508 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.508 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:10.508 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:10.508 1 heaps totaling size 816.000000 MiB 00:05:10.508 size: 816.000000 MiB heap id: 0 00:05:10.508 end heaps---------- 00:05:10.508 9 mempools totaling size 595.772034 MiB 00:05:10.508 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:10.508 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:10.508 size: 92.545471 MiB name: bdev_io_58125 00:05:10.508 size: 50.003479 MiB name: msgpool_58125 00:05:10.508 size: 36.509338 MiB name: fsdev_io_58125 00:05:10.508 size: 21.763794 MiB name: PDU_Pool 00:05:10.508 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:10.508 size: 4.133484 MiB name: evtpool_58125 00:05:10.508 size: 0.026123 MiB name: Session_Pool 00:05:10.508 end mempools------- 00:05:10.508 6 memzones totaling size 4.142822 MiB 00:05:10.508 size: 1.000366 MiB name: RG_ring_0_58125 00:05:10.508 size: 1.000366 MiB name: RG_ring_1_58125 00:05:10.508 size: 1.000366 MiB name: RG_ring_4_58125 00:05:10.508 size: 1.000366 MiB name: RG_ring_5_58125 00:05:10.508 size: 0.125366 MiB name: RG_ring_2_58125 00:05:10.508 size: 0.015991 MiB name: RG_ring_3_58125 00:05:10.508 end memzones------- 00:05:10.508 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:10.508 heap id: 0 total size: 816.000000 MiB number of busy elements: 324 number of free elements: 18 00:05:10.508 list of free elements. 
size: 16.789185 MiB
00:05:10.508 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:10.508 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:10.508 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:10.508 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:10.508 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:10.508 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:10.508 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:10.508 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:10.508 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:10.508 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:10.508 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:10.508 element at address: 0x20001ac00000 with size: 0.559753 MiB
00:05:10.508 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:10.508 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:10.508 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:10.508 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:10.508 element at address: 0x200028000000 with size: 0.390442 MiB
00:05:10.508 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:10.508 list of standard malloc elements. size: 199.289917 MiB
00:05:10.508 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:10.508 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:10.508 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:10.508 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:10.508 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:10.508 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:10.508 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:10.508 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:10.508 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:10.508 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:10.508 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:10.508 [several hundred individually listed elements of 0.000244 MiB each, covering the ranges 0x2000002d7b00-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d7c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012cf24c0, 0x200018afdd00-0x200018efdd00, 0x2000192ffc40-0x2000196bc680, 0x20001ac8f4c0-0x20001ac953c0 and 0x200028063f40-0x20002806fe80, omitted here for brevity]
00:05:10.510 list of memzone associated elements. size: 599.920898 MiB
00:05:10.510 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:10.510 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:10.510 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:10.510 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:10.510 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:10.510 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58125_0
00:05:10.510 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:10.510 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58125_0
00:05:10.510 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:10.510 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58125_0
00:05:10.510 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:10.510 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:10.510 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:10.510 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:10.510 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:10.510 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58125_0
00:05:10.510 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:10.510 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58125
00:05:10.510 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:10.510 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58125
00:05:10.510 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:10.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:10.510 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:10.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:10.510 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:10.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:10.510 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:10.510 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:10.510 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:10.510 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58125
00:05:10.510 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:10.510 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58125
00:05:10.510 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:10.510 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58125
00:05:10.510 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:10.510 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58125
00:05:10.510 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:10.510 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58125
00:05:10.510 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:10.510 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58125
00:05:10.510 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:10.510 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:10.510 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:10.510 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:10.510 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:10.510 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:10.510 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:10.510 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58125
00:05:10.510 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:10.510 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58125
00:05:10.510 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:10.510 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:10.510 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:10.510 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:10.510 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:10.510 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58125
00:05:10.510 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:10.510 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:10.510 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:10.510 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58125
00:05:10.510 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:10.510 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58125
00:05:10.510 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:10.510 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58125
00:05:10.510 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:10.510 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:10.510 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:10.510 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58125
00:05:10.510 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58125 ']'
00:05:10.510 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58125
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58125
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.511 killing process with pid 58125
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58125'
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58125
00:05:10.511 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58125
00:05:12.419
00:05:12.419 real 0m2.669s
00:05:12.419 user 0m2.689s
00:05:12.419 sys 0m0.384s
00:05:12.419 09:17:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.419 ************************************
00:05:12.419 END TEST dpdk_mem_utility
00:05:12.419 ************************************
00:05:12.419 09:17:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:12.419 09:17:37 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:12.419 09:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.419 09:17:37 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.419 09:17:37 -- common/autotest_common.sh@10 -- # set +x
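The heap and memzone dump above is what the dpdk_mem_utility suite collects from the running target over JSON-RPC before tearing it down. A minimal hand-run sketch of the same query follows; the repo path mirrors this job, env_dpdk_get_mem_stats is SPDK's generic memory-stats RPC, and the jq/grep post-processing is illustrative rather than a quote of test_dpdk_mem_info.sh:

    #!/usr/bin/env bash
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Ask the app to write its DPDK heap/memzone stats to a file; the RPC
    # returns that file's path as JSON.
    dump=$("$rpc_py" env_dpdk_get_mem_stats | jq -r '.filename')
    grep -c 'element at address' "$dump"                        # total heap elements
    grep 'element at address' "$dump" | sort -k7 -rn | head     # largest elements first
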
00:05:12.419 ************************************ 00:05:12.419 START TEST event 00:05:12.419 ************************************ 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.419 * Looking for test storage... 00:05:12.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.419 09:17:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.419 09:17:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.419 09:17:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.419 09:17:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.419 09:17:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.419 09:17:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.419 09:17:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.419 09:17:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.419 09:17:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.419 09:17:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.419 09:17:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.419 09:17:37 event -- scripts/common.sh@344 -- # case "$op" in 00:05:12.419 09:17:37 event -- scripts/common.sh@345 -- # : 1 00:05:12.419 09:17:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.419 09:17:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.419 09:17:37 event -- scripts/common.sh@365 -- # decimal 1 00:05:12.419 09:17:37 event -- scripts/common.sh@353 -- # local d=1 00:05:12.419 09:17:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.419 09:17:37 event -- scripts/common.sh@355 -- # echo 1 00:05:12.419 09:17:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.419 09:17:37 event -- scripts/common.sh@366 -- # decimal 2 00:05:12.419 09:17:37 event -- scripts/common.sh@353 -- # local d=2 00:05:12.419 09:17:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.419 09:17:37 event -- scripts/common.sh@355 -- # echo 2 00:05:12.419 09:17:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.419 09:17:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.419 09:17:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.419 09:17:37 event -- scripts/common.sh@368 -- # return 0 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.419 --rc genhtml_branch_coverage=1 00:05:12.419 --rc genhtml_function_coverage=1 00:05:12.419 --rc genhtml_legend=1 00:05:12.419 --rc geninfo_all_blocks=1 00:05:12.419 --rc geninfo_unexecuted_blocks=1 00:05:12.419 00:05:12.419 ' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.419 --rc genhtml_branch_coverage=1 00:05:12.419 --rc genhtml_function_coverage=1 00:05:12.419 --rc genhtml_legend=1 00:05:12.419 --rc 
geninfo_all_blocks=1 00:05:12.419 --rc geninfo_unexecuted_blocks=1 00:05:12.419 00:05:12.419 ' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.419 --rc genhtml_branch_coverage=1 00:05:12.419 --rc genhtml_function_coverage=1 00:05:12.419 --rc genhtml_legend=1 00:05:12.419 --rc geninfo_all_blocks=1 00:05:12.419 --rc geninfo_unexecuted_blocks=1 00:05:12.419 00:05:12.419 ' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.419 --rc genhtml_branch_coverage=1 00:05:12.419 --rc genhtml_function_coverage=1 00:05:12.419 --rc genhtml_legend=1 00:05:12.419 --rc geninfo_all_blocks=1 00:05:12.419 --rc geninfo_unexecuted_blocks=1 00:05:12.419 00:05:12.419 ' 00:05:12.419 09:17:37 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:12.419 09:17:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.419 09:17:37 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:12.419 09:17:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.419 09:17:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.419 ************************************ 00:05:12.419 START TEST event_perf 00:05:12.419 ************************************ 00:05:12.419 09:17:37 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.419 Running I/O for 1 seconds...[2024-11-20 09:17:37.596413] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:12.419 [2024-11-20 09:17:37.596523] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:05:12.419 [2024-11-20 09:17:37.754051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.419 Running I/O for 1 seconds...[2024-11-20 09:17:37.854468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.419 [2024-11-20 09:17:37.854296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.419 [2024-11-20 09:17:37.854484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.419 [2024-11-20 09:17:37.854409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.791 00:05:13.791 lcore 0: 198951 00:05:13.791 lcore 1: 198948 00:05:13.792 lcore 2: 198948 00:05:13.792 lcore 3: 198950 00:05:13.792 done. 
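Each suite prologue above probes the installed lcov and gates the coverage flags on its version through the scripts/common.sh helpers visible in the xtrace (lt 1.15 2 delegating to cmp_versions with a '<' operator). A simplified, self-contained sketch of that field-by-field comparison, assuming purely numeric version fields; it is not the exact SPDK implementation:

    #!/usr/bin/env bash
    cmp_versions() {
        # Split both versions on dots and dashes, then compare field by field.
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
        done
        [[ $2 == '==' ]]   # every field matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the decision traced above
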
00:05:13.792 00:05:13.792 real 0m1.452s 00:05:13.792 user 0m4.255s 00:05:13.792 sys 0m0.077s 00:05:13.792 09:17:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.792 09:17:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.792 ************************************ 00:05:13.792 END TEST event_perf 00:05:13.792 ************************************ 00:05:13.792 09:17:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.792 09:17:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:13.792 09:17:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.792 09:17:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.792 ************************************ 00:05:13.792 START TEST event_reactor 00:05:13.792 ************************************ 00:05:13.792 09:17:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.792 [2024-11-20 09:17:39.093029] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:13.792 [2024-11-20 09:17:39.093140] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58258 ] 00:05:14.050 [2024-11-20 09:17:39.250141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.050 [2024-11-20 09:17:39.345024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.419 test_start 00:05:15.419 oneshot 00:05:15.419 tick 100 00:05:15.419 tick 100 00:05:15.419 tick 250 00:05:15.419 tick 100 00:05:15.419 tick 100 00:05:15.419 tick 100 00:05:15.419 tick 250 00:05:15.419 tick 500 00:05:15.419 tick 100 00:05:15.419 tick 100 00:05:15.419 tick 250 00:05:15.419 tick 100 00:05:15.419 tick 100 00:05:15.419 test_end 00:05:15.419 00:05:15.419 real 0m1.407s 00:05:15.419 user 0m1.235s 00:05:15.419 sys 0m0.064s 00:05:15.419 09:17:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.419 09:17:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 ************************************ 00:05:15.419 END TEST event_reactor 00:05:15.419 ************************************ 00:05:15.419 09:17:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.419 09:17:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:15.419 09:17:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.419 09:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 ************************************ 00:05:15.419 START TEST event_reactor_perf 00:05:15.419 ************************************ 00:05:15.419 09:17:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.419 [2024-11-20 09:17:40.542335] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
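The event-framework benchmarks in this suite are standalone binaries under test/event/ and can be rerun by hand with the flags from the run_test traces above (-m is the reactor core mask, -t the run time in seconds):

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1    # per-lcore event counts, then "done."
    ./test/event/reactor/reactor -t 1                 # oneshot/tick schedule as printed above
    ./test/event/reactor_perf/reactor_perf -t 1       # reports "Performance: N events per second"
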
00:05:15.419 [2024-11-20 09:17:40.542456] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:05:15.419 [2024-11-20 09:17:40.698591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.419 [2024-11-20 09:17:40.798640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.792 test_start 00:05:16.792 test_end 00:05:16.792 Performance: 316323 events per second 00:05:16.792 00:05:16.792 real 0m1.439s 00:05:16.792 user 0m1.272s 00:05:16.792 sys 0m0.059s 00:05:16.792 09:17:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.792 ************************************ 00:05:16.792 END TEST event_reactor_perf 00:05:16.792 ************************************ 00:05:16.792 09:17:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.792 09:17:42 event -- event/event.sh@49 -- # uname -s 00:05:16.792 09:17:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.792 09:17:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.792 09:17:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.792 09:17:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.792 09:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.792 ************************************ 00:05:16.792 START TEST event_scheduler 00:05:16.792 ************************************ 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.792 * Looking for test storage... 
00:05:16.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.792 09:17:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.792 --rc genhtml_branch_coverage=1 00:05:16.792 --rc genhtml_function_coverage=1 00:05:16.792 --rc genhtml_legend=1 00:05:16.792 --rc geninfo_all_blocks=1 00:05:16.792 --rc geninfo_unexecuted_blocks=1 00:05:16.792 00:05:16.792 ' 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.792 --rc genhtml_branch_coverage=1 00:05:16.792 --rc genhtml_function_coverage=1 00:05:16.792 --rc genhtml_legend=1 00:05:16.792 --rc geninfo_all_blocks=1 00:05:16.792 --rc geninfo_unexecuted_blocks=1 00:05:16.792 00:05:16.792 ' 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.792 --rc genhtml_branch_coverage=1 00:05:16.792 --rc genhtml_function_coverage=1 00:05:16.792 --rc genhtml_legend=1 00:05:16.792 --rc geninfo_all_blocks=1 00:05:16.792 --rc geninfo_unexecuted_blocks=1 00:05:16.792 00:05:16.792 ' 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.792 --rc genhtml_branch_coverage=1 00:05:16.792 --rc genhtml_function_coverage=1 00:05:16.792 --rc genhtml_legend=1 00:05:16.792 --rc geninfo_all_blocks=1 00:05:16.792 --rc geninfo_unexecuted_blocks=1 00:05:16.792 00:05:16.792 ' 00:05:16.792 09:17:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.792 09:17:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58365 00:05:16.792 09:17:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.792 09:17:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58365 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58365 ']' 00:05:16.792 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.792 09:17:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.792 09:17:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.792 [2024-11-20 09:17:42.223124] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:16.793 [2024-11-20 09:17:42.223249] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58365 ] 00:05:17.050 [2024-11-20 09:17:42.380143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.050 [2024-11-20 09:17:42.484581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.050 [2024-11-20 09:17:42.484930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.050 [2024-11-20 09:17:42.485292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.050 [2024-11-20 09:17:42.485415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:17.986 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.986 POWER: Cannot set governor of lcore 0 to performance 00:05:17.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.986 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:17.986 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:17.986 POWER: Unable to set Power Management Environment for lcore 0 00:05:17.986 [2024-11-20 09:17:43.122760] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:17.986 [2024-11-20 09:17:43.122778] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:17.986 [2024-11-20 09:17:43.122787] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:17.986 [2024-11-20 
09:17:43.122805] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:17.986 [2024-11-20 09:17:43.122813] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:17.986 [2024-11-20 09:17:43.122822] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 [2024-11-20 09:17:43.343678] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 ************************************ 00:05:17.986 START TEST scheduler_create_thread 00:05:17.986 ************************************ 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 2 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 3 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 4 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.986 09:17:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 5 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 6 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 7 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 8 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 9 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 10 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.986 09:17:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.986 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.261 09:17:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.193 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.193 00:05:19.193 real 0m1.170s 00:05:19.193 user 0m0.014s 00:05:19.193 sys 0m0.004s 00:05:19.193 ************************************ 00:05:19.193 END TEST scheduler_create_thread 00:05:19.193 ************************************ 00:05:19.193 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.193 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.193 09:17:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:19.193 09:17:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58365 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58365 ']' 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58365 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58365 00:05:19.193 09:17:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:19.194 09:17:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:19.194 killing process with pid 58365 00:05:19.194 09:17:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58365' 00:05:19.194 09:17:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58365 00:05:19.194 
09:17:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58365 00:05:19.759 [2024-11-20 09:17:45.004016] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:20.325 00:05:20.325 real 0m3.561s 00:05:20.325 user 0m6.021s 00:05:20.325 sys 0m0.318s 00:05:20.325 09:17:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.325 ************************************ 00:05:20.325 END TEST event_scheduler 00:05:20.325 ************************************ 00:05:20.325 09:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.325 09:17:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.325 09:17:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.325 09:17:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.325 09:17:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.325 09:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.325 ************************************ 00:05:20.325 START TEST app_repeat 00:05:20.325 ************************************ 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58449 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58449' 00:05:20.325 Process app_repeat pid: 58449 00:05:20.325 spdk_app_start Round 0 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:05:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.325 09:17:45 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
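The event_scheduler suite above never drives the app directly: it starts the test binary with --wait-for-rpc and then controls everything over JSON-RPC through a test plugin. The key calls, runnable by hand against such an app (the values mirror the traces; pointing PYTHONPATH at the plugin directory is an assumption about the harness environment, not visible in this excerpt):

    export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler   # assumed plugin location
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py framework_set_scheduler dynamic   # must run before init, hence --wait-for-rpc
    $rpc_py framework_start_init
    # One busy thread pinned to core 0: mask 0x1, 100% active, as in the test.
    $rpc_py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
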
00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.325 09:17:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.325 [2024-11-20 09:17:45.677067] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:20.325 [2024-11-20 09:17:45.677185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58449 ] 00:05:20.583 [2024-11-20 09:17:45.837247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.583 [2024-11-20 09:17:45.940099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.583 [2024-11-20 09:17:45.940226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.149 09:17:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.149 09:17:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.149 09:17:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.406 Malloc0 00:05:21.406 09:17:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.664 Malloc1 00:05:21.665 09:17:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.665 09:17:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.922 /dev/nbd0 00:05:21.922 09:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.922 09:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.922 09:17:47 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.922 1+0 records in 00:05:21.922 1+0 records out 00:05:21.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276177 s, 14.8 MB/s 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.922 09:17:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.922 09:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.922 09:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.922 09:17:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.922 /dev/nbd1 00:05:22.179 09:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.179 09:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.179 09:17:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.179 09:17:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.180 1+0 records in 00:05:22.180 1+0 records out 00:05:22.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441703 s, 9.3 MB/s 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.180 09:17:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.180 
09:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.180 { 00:05:22.180 "nbd_device": "/dev/nbd0", 00:05:22.180 "bdev_name": "Malloc0" 00:05:22.180 }, 00:05:22.180 { 00:05:22.180 "nbd_device": "/dev/nbd1", 00:05:22.180 "bdev_name": "Malloc1" 00:05:22.180 } 00:05:22.180 ]' 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.180 { 00:05:22.180 "nbd_device": "/dev/nbd0", 00:05:22.180 "bdev_name": "Malloc0" 00:05:22.180 }, 00:05:22.180 { 00:05:22.180 "nbd_device": "/dev/nbd1", 00:05:22.180 "bdev_name": "Malloc1" 00:05:22.180 } 00:05:22.180 ]' 00:05:22.180 09:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.437 /dev/nbd1' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.437 /dev/nbd1' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.437 256+0 records in 00:05:22.437 256+0 records out 00:05:22.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00832045 s, 126 MB/s 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.437 256+0 records in 00:05:22.437 256+0 records out 00:05:22.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180769 s, 58.0 MB/s 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.437 256+0 records in 00:05:22.437 256+0 records out 00:05:22.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177736 s, 59.0 MB/s 00:05:22.437 09:17:47 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.437 09:17:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.694 09:17:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.694 09:17:48 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.694 09:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.952 09:17:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.952 09:17:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.517 09:17:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.083 [2024-11-20 09:17:49.429161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.083 [2024-11-20 09:17:49.527244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.083 [2024-11-20 09:17:49.527409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.340 [2024-11-20 09:17:49.649603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.340 [2024-11-20 09:17:49.649661] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.862 09:17:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.862 spdk_app_start Round 1 00:05:26.862 09:17:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.862 09:17:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
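[Round 0 above ran a full data-integrity pass over both nbd devices before tearing the app down. Condensed into plain shell, the cycle the trace performs is roughly the following; paths and sizes mirror the log, but this is a sketch, not the harness code:

# Write 1 MiB of random data through each nbd device, then byte-compare it back.
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 256 x 4 KiB = 1 MiB
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through the kernel nbd node
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                             # fails on the first mismatched byte
done
rm "$tmp"
]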
00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.862 09:17:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.862 09:17:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.862 Malloc0 00:05:26.862 09:17:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.862 Malloc1 00:05:27.120 09:17:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.120 /dev/nbd0 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.120 1+0 records in 00:05:27.120 1+0 records out 
00:05:27.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169446 s, 24.2 MB/s 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.120 09:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.120 09:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.377 /dev/nbd1 00:05:27.377 09:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.377 09:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.377 1+0 records in 00:05:27.377 1+0 records out 00:05:27.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243222 s, 16.8 MB/s 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.377 09:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.378 09:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.378 09:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.378 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.378 09:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.378 09:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.378 09:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.378 09:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.635 { 00:05:27.635 "nbd_device": "/dev/nbd0", 00:05:27.635 "bdev_name": "Malloc0" 00:05:27.635 }, 00:05:27.635 { 00:05:27.635 "nbd_device": "/dev/nbd1", 00:05:27.635 "bdev_name": "Malloc1" 00:05:27.635 } 
00:05:27.635 ]' 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.635 { 00:05:27.635 "nbd_device": "/dev/nbd0", 00:05:27.635 "bdev_name": "Malloc0" 00:05:27.635 }, 00:05:27.635 { 00:05:27.635 "nbd_device": "/dev/nbd1", 00:05:27.635 "bdev_name": "Malloc1" 00:05:27.635 } 00:05:27.635 ]' 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.635 /dev/nbd1' 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.635 /dev/nbd1' 00:05:27.635 09:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.635 256+0 records in 00:05:27.635 256+0 records out 00:05:27.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713277 s, 147 MB/s 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.635 256+0 records in 00:05:27.635 256+0 records out 00:05:27.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152199 s, 68.9 MB/s 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.635 256+0 records in 00:05:27.635 256+0 records out 00:05:27.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178509 s, 58.7 MB/s 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.635 09:17:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.635 09:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.892 09:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.149 09:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.406 09:17:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.406 09:17:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.664 09:17:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.229 [2024-11-20 09:17:54.569886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.229 [2024-11-20 09:17:54.649308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.229 [2024-11-20 09:17:54.649339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.486 [2024-11-20 09:17:54.746978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.486 [2024-11-20 09:17:54.747029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.011 spdk_app_start Round 2 00:05:32.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.011 09:17:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.011 09:17:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.011 09:17:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
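[Each round binds the malloc bdevs to kernel nbd nodes over RPC and detaches them once verification completes. A hedged sketch of that attach/probe/detach pairing, using the same RPC methods the trace calls:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096          # 64 MiB bdev with 4 KiB blocks; prints "Malloc0"
$RPC nbd_start_disk Malloc0 /dev/nbd0    # export the bdev as a kernel block device
grep -q -w nbd0 /proc/partitions         # same readiness probe the log retries in a loop
$RPC nbd_stop_disk /dev/nbd0             # detach once I/O verification is done
]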
00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.011 09:17:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.011 09:17:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.011 Malloc0 00:05:32.011 09:17:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.269 Malloc1 00:05:32.269 09:17:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.269 09:17:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.270 09:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.528 /dev/nbd0 00:05:32.528 09:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.528 09:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.528 1+0 records in 00:05:32.528 1+0 records out 
00:05:32.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263648 s, 15.5 MB/s 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.528 09:17:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.528 09:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.528 09:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.528 09:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.785 /dev/nbd1 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.785 1+0 records in 00:05:32.785 1+0 records out 00:05:32.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178777 s, 22.9 MB/s 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.785 09:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.785 09:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.043 { 00:05:33.043 "nbd_device": "/dev/nbd0", 00:05:33.043 "bdev_name": "Malloc0" 00:05:33.043 }, 00:05:33.043 { 00:05:33.043 "nbd_device": "/dev/nbd1", 00:05:33.043 "bdev_name": "Malloc1" 00:05:33.043 } 
00:05:33.043 ]' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.043 { 00:05:33.043 "nbd_device": "/dev/nbd0", 00:05:33.043 "bdev_name": "Malloc0" 00:05:33.043 }, 00:05:33.043 { 00:05:33.043 "nbd_device": "/dev/nbd1", 00:05:33.043 "bdev_name": "Malloc1" 00:05:33.043 } 00:05:33.043 ]' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.043 /dev/nbd1' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.043 /dev/nbd1' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.043 256+0 records in 00:05:33.043 256+0 records out 00:05:33.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547978 s, 191 MB/s 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.043 256+0 records in 00:05:33.043 256+0 records out 00:05:33.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119468 s, 87.8 MB/s 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.043 256+0 records in 00:05:33.043 256+0 records out 00:05:33.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241251 s, 43.5 MB/s 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.043 09:17:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.043 09:17:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.044 09:17:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.302 09:17:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.644 09:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.902 09:17:59 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.902 09:17:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.902 09:17:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.160 09:17:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.728 [2024-11-20 09:18:00.009120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.728 [2024-11-20 09:18:00.093270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.728 [2024-11-20 09:18:00.093267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.985 [2024-11-20 09:18:00.193612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.985 [2024-11-20 09:18:00.193681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.525 09:18:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58449 /var/tmp/spdk-nbd.sock 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58449 ']' 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
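[The final waitforlisten above precedes teardown: killprocess checks that the pid is still alive and is the expected reactor process before signalling it. A simplified sketch of that guard-then-kill sequence, reconstructed from what the trace shows:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")      # same identity check as the log
    [[ $name != sudo ]] || return 1              # refuse to signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap it; a nonzero exit is expected here
}
]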
00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.525 09:18:02 event.app_repeat -- event/event.sh@39 -- # killprocess 58449 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58449 ']' 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58449 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58449 00:05:37.525 killing process with pid 58449 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58449' 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58449 00:05:37.525 09:18:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58449 00:05:37.786 spdk_app_start is called in Round 0. 00:05:37.786 Shutdown signal received, stop current app iteration 00:05:37.786 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:37.786 spdk_app_start is called in Round 1. 00:05:37.786 Shutdown signal received, stop current app iteration 00:05:37.786 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:37.786 spdk_app_start is called in Round 2. 00:05:37.786 Shutdown signal received, stop current app iteration 00:05:37.786 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:37.786 spdk_app_start is called in Round 3. 00:05:37.786 Shutdown signal received, stop current app iteration 00:05:37.786 ************************************ 00:05:37.786 END TEST app_repeat 00:05:37.786 ************************************ 00:05:37.786 09:18:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.786 09:18:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:37.786 00:05:37.786 real 0m17.577s 00:05:37.786 user 0m38.281s 00:05:37.786 sys 0m2.125s 00:05:37.786 09:18:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.786 09:18:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.047 09:18:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:38.047 09:18:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:38.047 09:18:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.047 09:18:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.047 09:18:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.047 ************************************ 00:05:38.047 START TEST cpu_locks 00:05:38.047 ************************************ 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:38.047 * Looking for test storage... 
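[cpu_locks is launched through the same run_test wrapper that produced the START/END banners and the real/user/sys timing lines earlier in the log. A minimal sketch of what such a wrapper does; the banner layout is taken from the log, but the body is illustrative rather than the actual helper in autotest_common.sh:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                                # emits the real/user/sys summary seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}
]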
00:05:38.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.047 09:18:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.047 --rc genhtml_branch_coverage=1 00:05:38.047 --rc genhtml_function_coverage=1 00:05:38.047 --rc genhtml_legend=1 00:05:38.047 --rc geninfo_all_blocks=1 00:05:38.047 --rc geninfo_unexecuted_blocks=1 00:05:38.047 00:05:38.047 ' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.047 --rc genhtml_branch_coverage=1 00:05:38.047 --rc genhtml_function_coverage=1 
00:05:38.047 --rc genhtml_legend=1 00:05:38.047 --rc geninfo_all_blocks=1 00:05:38.047 --rc geninfo_unexecuted_blocks=1 00:05:38.047 00:05:38.047 ' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.047 --rc genhtml_branch_coverage=1 00:05:38.047 --rc genhtml_function_coverage=1 00:05:38.047 --rc genhtml_legend=1 00:05:38.047 --rc geninfo_all_blocks=1 00:05:38.047 --rc geninfo_unexecuted_blocks=1 00:05:38.047 00:05:38.047 ' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.047 --rc genhtml_branch_coverage=1 00:05:38.047 --rc genhtml_function_coverage=1 00:05:38.047 --rc genhtml_legend=1 00:05:38.047 --rc geninfo_all_blocks=1 00:05:38.047 --rc geninfo_unexecuted_blocks=1 00:05:38.047 00:05:38.047 ' 00:05:38.047 09:18:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.047 09:18:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.047 09:18:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.047 09:18:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.047 09:18:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.047 ************************************ 00:05:38.047 START TEST default_locks 00:05:38.047 ************************************ 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58878 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58878 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58878 ']' 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.047 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.048 09:18:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.048 09:18:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.048 [2024-11-20 09:18:03.497838] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
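
Note: the xtrace above steps through the lcov version gate in scripts/common.sh: `lcov --version` is piped to `awk '{print $NF}'`, then `lt 1.15 2` splits both strings on '.', '-' and ':' and compares component by component, which is why 1.15 sorts below 2 even though a plain string compare would not. A minimal standalone sketch of that idea (the real helper also validates each component via decimal(); non-numeric components are assumed absent here):

lt() {   # usage: lt VER1 VER2 -> true when VER1 < VER2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # versions are equal
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: branch/function coverage flags needed"
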
00:05:38.048 [2024-11-20 09:18:03.497965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:05:38.306 [2024-11-20 09:18:03.656091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.306 [2024-11-20 09:18:03.739189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.246 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.246 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58878 ']' 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.247 killing process with pid 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58878' 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58878 00:05:39.247 09:18:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58878 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58878 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58878 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58878 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58878 ']' 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.143 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.143 ERROR: process (pid: 58878) is no longer running 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58878) - No such process 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.143 00:05:41.143 real 0m2.691s 00:05:41.143 user 0m2.700s 00:05:41.143 sys 0m0.461s 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.143 09:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.143 ************************************ 00:05:41.143 END TEST default_locks 00:05:41.143 ************************************ 00:05:41.143 09:18:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:41.143 09:18:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.143 09:18:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.143 09:18:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.143 ************************************ 00:05:41.143 START TEST default_locks_via_rpc 00:05:41.143 ************************************ 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58938 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58938 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58938 ']' 00:05:41.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
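
Note: default_locks ends above by proving the negative: once the target is killed, waitforlisten on its pid must fail, and the NOT wrapper turns that failure into a pass. A stripped-down sketch of the inversion (the real autotest_common.sh version also type-checks its argument and treats exit codes above 128 specially; $dead_pid stands in for the killed target's pid):

NOT() {   # succeeds only when "$@" fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT kill -0 "$dead_pid" && echo "pid $dead_pid is gone, as required"
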
00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.143 09:18:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.143 [2024-11-20 09:18:06.242511] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:41.143 [2024-11-20 09:18:06.242644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ] 00:05:41.143 [2024-11-20 09:18:06.401227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.143 [2024-11-20 09:18:06.504924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:41.709 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58938 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58938 00:05:41.710 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.967 09:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58938 00:05:41.967 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58938 ']' 
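
Note: the default_locks_via_rpc sequence above toggles the locks at runtime over JSON-RPC rather than at startup: disable releases the lock files (no_locks sees an empty glob), enable re-acquires them. A sketch of the same round-trip with the stock rpc.py client (client path is an assumption; the trace goes through the repo's rpc_cmd wrapper):

scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
lslocks -p "$pid" | grep spdk_cpu_lock || echo "no lock files held while disabled"
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core-0 lock re-acquired"
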
00:05:41.967 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58938 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58938 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.968 killing process with pid 58938 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58938' 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58938 00:05:41.968 09:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58938 00:05:43.869 00:05:43.869 real 0m2.729s 00:05:43.869 user 0m2.689s 00:05:43.869 sys 0m0.406s 00:05:43.869 09:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.869 ************************************ 00:05:43.869 END TEST default_locks_via_rpc 00:05:43.869 ************************************ 00:05:43.869 09:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.869 09:18:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.869 09:18:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.869 09:18:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.869 09:18:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.869 ************************************ 00:05:43.869 START TEST non_locking_app_on_locked_coremask 00:05:43.869 ************************************ 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59001 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59001 /var/tmp/spdk.sock 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59001 ']' 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
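
Note: teardown follows the same killprocess pattern each time: confirm the pid still answers kill -0, peek at its command name as a sanity check, then SIGTERM and reap it. A condensed sketch (the full helper additionally special-cases a sudo parent and re-resolves the child):

killprocess() {
    local pid=$1 name
    kill -0 "$pid"                            # fails fast if already dead
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" || true                       # reap; a SIGTERM exit status is expected
}
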
00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.869 09:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.869 [2024-11-20 09:18:09.032144] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:43.869 [2024-11-20 09:18:09.032270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:05:43.869 [2024-11-20 09:18:09.185276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.869 [2024-11-20 09:18:09.286248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59017 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59017 /var/tmp/spdk2.sock 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59017 ']' 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.436 09:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.694 [2024-11-20 09:18:09.969666] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:44.694 [2024-11-20 09:18:09.969838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59017 ] 00:05:44.952 [2024-11-20 09:18:10.158592] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
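
Note: two targets have now been placed on the same core deliberately: the second was started with --disable-cpumask-locks and its own RPC socket, so it never contends for the core-0 lock held by the first. The launch pair, reduced to its flags (binary path relative to the SPDK repo, as in the trace):

build/bin/spdk_tgt -m 0x1 &                                          # claims the core-0 lock
pid1=$!
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                                              # same core, no lock taken
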
00:05:44.952 [2024-11-20 09:18:10.158653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.952 [2024-11-20 09:18:10.358611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.326 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.326 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.326 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59001 00:05:46.326 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59001 00:05:46.326 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59001 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59001 ']' 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59001 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59001 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.583 killing process with pid 59001 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59001' 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59001 00:05:46.583 09:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59001 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59017 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59017 ']' 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59017 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59017 00:05:49.997 killing process with pid 59017 00:05:49.997 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.998 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.998 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59017' 00:05:49.998 09:18:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59017 00:05:49.998 09:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59017 00:05:50.930 00:05:50.930 real 0m7.422s 00:05:50.930 user 0m7.633s 00:05:50.930 sys 0m0.890s 00:05:50.930 09:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.930 ************************************ 00:05:50.930 END TEST non_locking_app_on_locked_coremask 00:05:50.930 ************************************ 00:05:50.930 09:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 09:18:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.188 09:18:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.188 09:18:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.188 09:18:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 ************************************ 00:05:51.188 START TEST locking_app_on_unlocked_coremask 00:05:51.188 ************************************ 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59119 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59119 /var/tmp/spdk.sock 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59119 ']' 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.188 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.189 09:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.189 [2024-11-20 09:18:16.517040] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:51.189 [2024-11-20 09:18:16.517175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:05:51.446 [2024-11-20 09:18:16.675573] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.446 [2024-11-20 09:18:16.675628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.446 [2024-11-20 09:18:16.751472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59135 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59135 /var/tmp/spdk2.sock 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59135 ']' 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.054 09:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.054 [2024-11-20 09:18:17.425668] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
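
Note: with the roles now reversed (first instance unlocked, second locking), the assertion that follows below tells the two apart by pid: only the second instance should own the core-0 lock file. In isolation:

lslocks -p "$pid1" | grep -q spdk_cpu_lock || echo "first instance (locks disabled) holds nothing"
lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "core-0 lock belongs to the second instance"
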
00:05:52.054 [2024-11-20 09:18:17.425790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59135 ] 00:05:52.342 [2024-11-20 09:18:17.587127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.342 [2024-11-20 09:18:17.744414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.275 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.276 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.276 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59135 00:05:53.276 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59135 00:05:53.276 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59119 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59119 ']' 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59119 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59119 00:05:53.534 killing process with pid 59119 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59119' 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59119 00:05:53.534 09:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59119 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59135 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59135 ']' 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59135 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59135 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.203 killing process with pid 59135 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59135' 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59135 00:05:56.203 09:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59135 00:05:57.137 00:05:57.137 real 0m6.128s 00:05:57.137 user 0m6.419s 00:05:57.137 sys 0m0.792s 00:05:57.137 09:18:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.137 ************************************ 00:05:57.137 09:18:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.137 END TEST locking_app_on_unlocked_coremask 00:05:57.137 ************************************ 00:05:57.396 09:18:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.396 09:18:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.396 09:18:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.396 09:18:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.396 ************************************ 00:05:57.396 START TEST locking_app_on_locked_coremask 00:05:57.396 ************************************ 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59226 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59226 /var/tmp/spdk.sock 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59226 ']' 00:05:57.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.396 09:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.396 [2024-11-20 09:18:22.702860] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:57.396 [2024-11-20 09:18:22.702982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:05:57.655 [2024-11-20 09:18:22.858901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.655 [2024-11-20 09:18:22.940047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59242 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59242 /var/tmp/spdk2.sock 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59242 /var/tmp/spdk2.sock 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59242 /var/tmp/spdk2.sock 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59242 ']' 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.220 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.221 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.221 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.221 09:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.221 [2024-11-20 09:18:23.606758] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:58.221 [2024-11-20 09:18:23.606874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59242 ] 00:05:58.479 [2024-11-20 09:18:23.769206] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59226 has claimed it. 00:05:58.479 [2024-11-20 09:18:23.769256] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59242) - No such process 00:05:59.046 ERROR: process (pid: 59242) is no longer running 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59226 ']' 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.046 killing process with pid 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59226' 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59226 00:05:59.046 09:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59226 00:06:00.417 00:06:00.417 real 0m2.999s 00:06:00.417 user 0m3.240s 00:06:00.417 sys 0m0.496s 00:06:00.417 09:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.417 09:18:25 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:00.417 ************************************ 00:06:00.417 END TEST locking_app_on_locked_coremask 00:06:00.417 ************************************ 00:06:00.417 09:18:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.417 09:18:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.417 09:18:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.417 09:18:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.417 ************************************ 00:06:00.417 START TEST locking_overlapped_coremask 00:06:00.417 ************************************ 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59295 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59295 /var/tmp/spdk.sock 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59295 ']' 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.417 09:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.417 [2024-11-20 09:18:25.740419] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
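
Note: the single-core collision verified just above is worth a miniature restatement: with the lock in place, a second locking target on the same core must refuse to start ("Cannot create lock on core 0"), and the test passes only because it does. A sketch under the same repo-relative paths, with sleep standing in for the waitforlisten polling:

build/bin/spdk_tgt -m 0x1 & pid1=$!
sleep 1   # crude stand-in for waitforlisten
if ! build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second locking instance refused core 0, as the test requires"
fi
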
00:06:00.417 [2024-11-20 09:18:25.740538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59295 ] 00:06:00.674 [2024-11-20 09:18:25.898812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.674 [2024-11-20 09:18:26.003841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.674 [2024-11-20 09:18:26.004220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.674 [2024-11-20 09:18:26.004317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59313 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59313 /var/tmp/spdk2.sock 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59313 /var/tmp/spdk2.sock 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59313 /var/tmp/spdk2.sock 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59313 ']' 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.238 09:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.553 [2024-11-20 09:18:26.692812] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
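
Note: the failure about to be reported is pure bit arithmetic: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks contend on exactly one core:

printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: only bit 2 is common, i.e. core 2 is contested
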
00:06:01.553 [2024-11-20 09:18:26.692935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59313 ] 00:06:01.553 [2024-11-20 09:18:26.866232] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59295 has claimed it. 00:06:01.553 [2024-11-20 09:18:26.866289] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59313) - No such process 00:06:02.133 ERROR: process (pid: 59313) is no longer running 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59295 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59295 ']' 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59295 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59295 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.133 killing process with pid 59295 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59295' 00:06:02.133 09:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59295 00:06:02.133 09:18:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59295 00:06:03.504 00:06:03.504 real 0m3.176s 00:06:03.504 user 0m8.656s 00:06:03.504 sys 0m0.435s 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.504 ************************************ 00:06:03.504 END TEST locking_overlapped_coremask 00:06:03.504 ************************************ 00:06:03.504 09:18:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.504 09:18:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.504 09:18:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.504 09:18:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.504 ************************************ 00:06:03.504 START TEST locking_overlapped_coremask_via_rpc 00:06:03.504 ************************************ 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59366 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59366 /var/tmp/spdk.sock 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59366 ']' 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.504 09:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.762 [2024-11-20 09:18:28.960285] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:03.762 [2024-11-20 09:18:28.960766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59366 ] 00:06:03.762 [2024-11-20 09:18:29.122173] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
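
Note: check_remaining_locks, stepped through before the teardown above, compares a glob of the live lock files against a brace expansion of the expected set; bash string-joins both arrays, so any extra or missing core shows up as a mismatch. In isolation:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"
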
00:06:03.762 [2024-11-20 09:18:29.122218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.019 [2024-11-20 09:18:29.225046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.019 [2024-11-20 09:18:29.225133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.019 [2024-11-20 09:18:29.225143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59384 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59384 /var/tmp/spdk2.sock 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59384 ']' 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.584 09:18:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.584 [2024-11-20 09:18:29.897736] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:04.584 [2024-11-20 09:18:29.897857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59384 ] 00:06:04.842 [2024-11-20 09:18:30.073565] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
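
Note: both via_rpc targets come up despite the core-2 overlap because, as the two "CPU core locks deactivated" notices confirm, neither touches the lock files at startup; the contest is deferred to the RPC that follows. The launch pair in brief:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
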
00:06:04.842 [2024-11-20 09:18:30.073623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.842 [2024-11-20 09:18:30.280925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.842 [2024-11-20 09:18:30.284930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.842 [2024-11-20 09:18:30.284951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.213 [2024-11-20 09:18:31.492439] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59366 has claimed it. 
00:06:06.213 request: 00:06:06.213 { 00:06:06.213 "method": "framework_enable_cpumask_locks", 00:06:06.213 "req_id": 1 00:06:06.213 } 00:06:06.213 Got JSON-RPC error response 00:06:06.213 response: 00:06:06.213 { 00:06:06.213 "code": -32603, 00:06:06.213 "message": "Failed to claim CPU core: 2" 00:06:06.213 } 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59366 /var/tmp/spdk.sock 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59366 ']' 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59384 /var/tmp/spdk2.sock 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59384 ']' 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
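(Note: the failed exchange above is the expected outcome of this test. The first target was launched with -m 0x7, which is cores 0-2, and the second with -m 0x1c, which is cores 2-4, both with --disable-cpumask-locks, so the masks deliberately overlap on core 2. Once the first instance claims its cores via framework_enable_cpumask_locks, the same RPC on the second instance must fail with JSON-RPC error -32603, "Failed to claim CPU core: 2". A minimal sketch of the same scenario outside the harness, assuming a built spdk_tgt and rpc.py in the usual tree locations, and a plain sleep in place of the harness's waitforlisten helper:

  # start two targets with overlapping coremasks, locks disabled at boot
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 2   # crude stand-in for waitforlisten

  # first target claims cores 0-2: creates /var/tmp/spdk_cpu_lock_000..002
  ./scripts/rpc.py framework_enable_cpumask_locks

  # second target now fails: core 2 is already locked (JSON-RPC -32603)
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The check_remaining_locks step further down verifies exactly those three lock files, /var/tmp/spdk_cpu_lock_000 through _002, are what remains.)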
00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.213 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.472 00:06:06.472 real 0m2.997s 00:06:06.472 user 0m1.034s 00:06:06.472 sys 0m0.126s 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.472 09:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.472 ************************************ 00:06:06.472 END TEST locking_overlapped_coremask_via_rpc 00:06:06.472 ************************************ 00:06:06.472 09:18:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.472 09:18:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59366 ]] 00:06:06.472 09:18:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59366 00:06:06.472 09:18:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59366 ']' 00:06:06.472 09:18:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59366 00:06:06.472 09:18:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.472 09:18:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.472 09:18:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59366 00:06:06.730 09:18:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.730 09:18:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.730 09:18:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59366' 00:06:06.730 killing process with pid 59366 00:06:06.730 09:18:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59366 00:06:06.730 09:18:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59366 00:06:08.117 09:18:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59384 ]] 00:06:08.117 09:18:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59384 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59384 ']' 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59384 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.118 
09:18:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59384 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:08.118 killing process with pid 59384 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59384' 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59384 00:06:08.118 09:18:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59384 00:06:09.490 09:18:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.490 Process with pid 59366 is not found 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59366 ]] 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59366 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59366 ']' 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59366 00:06:09.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59366) - No such process 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59366 is not found' 00:06:09.491 Process with pid 59384 is not found 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59384 ]] 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59384 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59384 ']' 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59384 00:06:09.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59384) - No such process 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59384 is not found' 00:06:09.491 09:18:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.491 00:06:09.491 real 0m31.334s 00:06:09.491 user 0m53.561s 00:06:09.491 sys 0m4.459s 00:06:09.491 ************************************ 00:06:09.491 END TEST cpu_locks 00:06:09.491 ************************************ 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.491 09:18:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.491 ************************************ 00:06:09.491 END TEST event 00:06:09.491 ************************************ 00:06:09.491 00:06:09.491 real 0m57.199s 00:06:09.491 user 1m44.798s 00:06:09.491 sys 0m7.316s 00:06:09.491 09:18:34 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.491 09:18:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.491 09:18:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.491 09:18:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.491 09:18:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.491 09:18:34 -- common/autotest_common.sh@10 -- # set +x 00:06:09.491 ************************************ 00:06:09.491 START TEST thread 00:06:09.491 ************************************ 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.491 * Looking for test storage... 
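(Note: the START TEST / END TEST banners and the real/user/sys totals reported above for cpu_locks and event come from the run_test wrapper in autotest_common.sh, which also performs the argument-count guard ('[' 2 -le 1 ']') and xtrace toggling visible in the trace. A rough sketch of the pattern, simplified from what the trace shows rather than a copy of the helper:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"              # run the suite, timing it for the summary lines
      echo "END TEST $name"
  }
  run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh)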
00:06:09.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.491 09:18:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.491 09:18:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.491 09:18:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.491 09:18:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.491 09:18:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.491 09:18:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.491 09:18:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.491 09:18:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.491 09:18:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.491 09:18:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.491 09:18:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.491 09:18:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:09.491 09:18:34 thread -- scripts/common.sh@345 -- # : 1 00:06:09.491 09:18:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.491 09:18:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.491 09:18:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:09.491 09:18:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:09.491 09:18:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.491 09:18:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:09.491 09:18:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.491 09:18:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:09.491 09:18:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:09.491 09:18:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.491 09:18:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:09.491 09:18:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.491 09:18:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.491 09:18:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.491 09:18:34 thread -- scripts/common.sh@368 -- # return 0 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.491 --rc genhtml_branch_coverage=1 00:06:09.491 --rc genhtml_function_coverage=1 00:06:09.491 --rc genhtml_legend=1 00:06:09.491 --rc geninfo_all_blocks=1 00:06:09.491 --rc geninfo_unexecuted_blocks=1 00:06:09.491 00:06:09.491 ' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.491 --rc genhtml_branch_coverage=1 00:06:09.491 --rc genhtml_function_coverage=1 00:06:09.491 --rc genhtml_legend=1 00:06:09.491 --rc geninfo_all_blocks=1 00:06:09.491 --rc geninfo_unexecuted_blocks=1 00:06:09.491 00:06:09.491 ' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:09.491 --rc genhtml_branch_coverage=1 00:06:09.491 --rc genhtml_function_coverage=1 00:06:09.491 --rc genhtml_legend=1 00:06:09.491 --rc geninfo_all_blocks=1 00:06:09.491 --rc geninfo_unexecuted_blocks=1 00:06:09.491 00:06:09.491 ' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.491 --rc genhtml_branch_coverage=1 00:06:09.491 --rc genhtml_function_coverage=1 00:06:09.491 --rc genhtml_legend=1 00:06:09.491 --rc geninfo_all_blocks=1 00:06:09.491 --rc geninfo_unexecuted_blocks=1 00:06:09.491 00:06:09.491 ' 00:06:09.491 09:18:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.491 09:18:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.491 ************************************ 00:06:09.491 START TEST thread_poller_perf 00:06:09.491 ************************************ 00:06:09.491 09:18:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.491 [2024-11-20 09:18:34.890059] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:09.491 [2024-11-20 09:18:34.890287] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 00:06:09.749 [2024-11-20 09:18:35.047242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.749 [2024-11-20 09:18:35.148895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.749 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:11.123 [2024-11-20T09:18:36.579Z] ====================================== 00:06:11.123 [2024-11-20T09:18:36.579Z] busy:2610963958 (cyc) 00:06:11.123 [2024-11-20T09:18:36.579Z] total_run_count: 299000 00:06:11.123 [2024-11-20T09:18:36.579Z] tsc_hz: 2600000000 (cyc) 00:06:11.123 [2024-11-20T09:18:36.579Z] ====================================== 00:06:11.123 [2024-11-20T09:18:36.579Z] poller_cost: 8732 (cyc), 3358 (nsec) 00:06:11.123 00:06:11.123 real 0m1.454s 00:06:11.123 user 0m1.276s 00:06:11.123 sys 0m0.069s 00:06:11.123 09:18:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.123 09:18:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.123 ************************************ 00:06:11.123 END TEST thread_poller_perf 00:06:11.123 ************************************ 00:06:11.123 09:18:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.123 09:18:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.123 09:18:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.123 09:18:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.123 ************************************ 00:06:11.123 START TEST thread_poller_perf 00:06:11.123 ************************************ 00:06:11.123 09:18:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.123 [2024-11-20 09:18:36.404366] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:11.124 [2024-11-20 09:18:36.404472] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59581 ] 00:06:11.124 [2024-11-20 09:18:36.560623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.381 [2024-11-20 09:18:36.659119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.381 Running 1000 pollers for 1 seconds with 0 microseconds period. 
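(Note: in the results table above, poller_cost is the busy cycle count divided by the number of poller executions, converted to wall time with the TSC frequency: 2610963958 cyc / 299000 runs ≈ 8732 cyc per poll, and 8732 / 2.6 cyc-per-nsec ≈ 3358 nsec. The poller_perf flags mean 1000 pollers (-b), a 1 microsecond period (-l), for 1 second (-t). The second run below uses -l 0, a pure busy poll, so the same arithmetic yields a far lower per-poll cost. The check, as shell arithmetic:

  busy=2610963958; runs=299000; tsc_hz=2600000000
  cyc=$((busy / runs))                    # 8732 cycles per poller run
  nsec=$((cyc * 1000000000 / tsc_hz))     # 3358 ns at 2.6 GHz
  echo "poller_cost: $cyc (cyc), $nsec (nsec)")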
00:06:12.755 [2024-11-20T09:18:38.211Z] ====================================== 00:06:12.755 [2024-11-20T09:18:38.211Z] busy:2603277574 (cyc) 00:06:12.755 [2024-11-20T09:18:38.211Z] total_run_count: 3820000 00:06:12.755 [2024-11-20T09:18:38.211Z] tsc_hz: 2600000000 (cyc) 00:06:12.755 [2024-11-20T09:18:38.211Z] ====================================== 00:06:12.755 [2024-11-20T09:18:38.211Z] poller_cost: 681 (cyc), 261 (nsec) 00:06:12.755 00:06:12.755 real 0m1.436s 00:06:12.755 user 0m1.269s 00:06:12.755 sys 0m0.060s 00:06:12.755 ************************************ 00:06:12.755 END TEST thread_poller_perf 00:06:12.755 ************************************ 00:06:12.755 09:18:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.755 09:18:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.755 09:18:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.755 ************************************ 00:06:12.755 END TEST thread 00:06:12.755 ************************************ 00:06:12.755 00:06:12.755 real 0m3.150s 00:06:12.755 user 0m2.669s 00:06:12.755 sys 0m0.238s 00:06:12.755 09:18:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.755 09:18:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.755 09:18:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:12.755 09:18:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.755 09:18:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.755 09:18:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.756 09:18:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 ************************************ 00:06:12.756 START TEST app_cmdline 00:06:12.756 ************************************ 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.756 * Looking for test storage... 
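(Note: the "Looking for test storage" / lcov trace that follows, already seen once before the thread suite, is the shared coverage preamble: it extracts the installed lcov version with awk '{print $NF}' and feeds it to the cmp_versions helper in scripts/common.sh, which splits both version strings on dots and compares them field by field; only when lcov sorts below 2 are the legacy --rc lcov_branch_coverage/lcov_function_coverage option names exported. A condensed sketch of that comparison, assuming the same field-splitting approach rather than the exact scripts/common.sh implementation:

  lt() {  # returns 0 when version $1 sorts before version $2
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: use legacy --rc option names")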
00:06:12.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.756 09:18:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 09:18:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.756 09:18:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59664 00:06:12.756 09:18:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59664 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59664 ']' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.756 09:18:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.756 09:18:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 [2024-11-20 09:18:38.074002] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:12.756 [2024-11-20 09:18:38.074288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:06:13.013 [2024-11-20 09:18:38.233263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.013 [2024-11-20 09:18:38.345229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.579 09:18:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.579 09:18:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:13.579 09:18:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:13.862 { 00:06:13.862 "version": "SPDK v25.01-pre git sha1 2741dd1ac", 00:06:13.862 "fields": { 00:06:13.862 "major": 25, 00:06:13.862 "minor": 1, 00:06:13.862 "patch": 0, 00:06:13.862 "suffix": "-pre", 00:06:13.862 "commit": "2741dd1ac" 00:06:13.862 } 00:06:13.862 } 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.862 09:18:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:13.862 09:18:39 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.119 request: 00:06:14.119 { 00:06:14.119 "method": "env_dpdk_get_mem_stats", 00:06:14.119 "req_id": 1 00:06:14.119 } 00:06:14.119 Got JSON-RPC error response 00:06:14.119 response: 00:06:14.119 { 00:06:14.119 "code": -32601, 00:06:14.119 "message": "Method not found" 00:06:14.119 } 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.119 09:18:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59664 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59664 ']' 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59664 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59664 00:06:14.119 killing process with pid 59664 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59664' 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 59664 00:06:14.119 09:18:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 59664 00:06:15.492 ************************************ 00:06:15.492 END TEST app_cmdline 00:06:15.492 ************************************ 00:06:15.492 00:06:15.492 real 0m3.016s 00:06:15.492 user 0m3.341s 00:06:15.492 sys 0m0.418s 00:06:15.492 09:18:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.492 09:18:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.492 09:18:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:15.492 09:18:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.492 09:18:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.492 09:18:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.751 ************************************ 00:06:15.751 START TEST version 00:06:15.751 ************************************ 00:06:15.751 09:18:40 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:15.751 * Looking for test storage... 
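(Note: the "Method not found" error above is the point of the cmdline test. This spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method, including env_dpdk_get_mem_stats, is rejected with JSON-RPC code -32601 before it is ever dispatched, while the allowlisted calls succeed. Roughly, against the default /var/tmp/spdk.sock socket used in this run:

  # allowlisted: succeeds and prints the version object shown earlier
  ./scripts/rpc.py spdk_get_version

  # not on the allowlist: fails with code -32601, "Method not found"
  ./scripts/rpc.py env_dpdk_get_mem_stats)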
00:06:15.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.751 09:18:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.751 09:18:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.751 09:18:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.751 09:18:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.751 09:18:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.751 09:18:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.751 09:18:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.751 09:18:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.751 09:18:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.751 09:18:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.751 09:18:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.751 09:18:41 version -- scripts/common.sh@344 -- # case "$op" in 00:06:15.751 09:18:41 version -- scripts/common.sh@345 -- # : 1 00:06:15.751 09:18:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.751 09:18:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.751 09:18:41 version -- scripts/common.sh@365 -- # decimal 1 00:06:15.751 09:18:41 version -- scripts/common.sh@353 -- # local d=1 00:06:15.751 09:18:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.751 09:18:41 version -- scripts/common.sh@355 -- # echo 1 00:06:15.751 09:18:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.751 09:18:41 version -- scripts/common.sh@366 -- # decimal 2 00:06:15.751 09:18:41 version -- scripts/common.sh@353 -- # local d=2 00:06:15.751 09:18:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.751 09:18:41 version -- scripts/common.sh@355 -- # echo 2 00:06:15.751 09:18:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.751 09:18:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.751 09:18:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.751 09:18:41 version -- scripts/common.sh@368 -- # return 0 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.751 --rc genhtml_branch_coverage=1 00:06:15.751 --rc genhtml_function_coverage=1 00:06:15.751 --rc genhtml_legend=1 00:06:15.751 --rc geninfo_all_blocks=1 00:06:15.751 --rc geninfo_unexecuted_blocks=1 00:06:15.751 00:06:15.751 ' 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.751 --rc genhtml_branch_coverage=1 00:06:15.751 --rc genhtml_function_coverage=1 00:06:15.751 --rc genhtml_legend=1 00:06:15.751 --rc geninfo_all_blocks=1 00:06:15.751 --rc geninfo_unexecuted_blocks=1 00:06:15.751 00:06:15.751 ' 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.751 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:15.751 --rc genhtml_branch_coverage=1 00:06:15.751 --rc genhtml_function_coverage=1 00:06:15.751 --rc genhtml_legend=1 00:06:15.751 --rc geninfo_all_blocks=1 00:06:15.751 --rc geninfo_unexecuted_blocks=1 00:06:15.751 00:06:15.751 ' 00:06:15.751 09:18:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.751 --rc genhtml_branch_coverage=1 00:06:15.751 --rc genhtml_function_coverage=1 00:06:15.751 --rc genhtml_legend=1 00:06:15.751 --rc geninfo_all_blocks=1 00:06:15.751 --rc geninfo_unexecuted_blocks=1 00:06:15.751 00:06:15.751 ' 00:06:15.751 09:18:41 version -- app/version.sh@17 -- # get_header_version major 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # cut -f2 00:06:15.751 09:18:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.751 09:18:41 version -- app/version.sh@17 -- # major=25 00:06:15.751 09:18:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:15.751 09:18:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # cut -f2 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.751 09:18:41 version -- app/version.sh@18 -- # minor=1 00:06:15.751 09:18:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.751 09:18:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # cut -f2 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.751 09:18:41 version -- app/version.sh@19 -- # patch=0 00:06:15.751 09:18:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.751 09:18:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # cut -f2 00:06:15.751 09:18:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.751 09:18:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.751 09:18:41 version -- app/version.sh@22 -- # version=25.1 00:06:15.751 09:18:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.751 09:18:41 version -- app/version.sh@28 -- # version=25.1rc0 00:06:15.752 09:18:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:15.752 09:18:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.752 09:18:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:15.752 09:18:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:15.752 00:06:15.752 real 0m0.210s 00:06:15.752 user 0m0.134s 00:06:15.752 sys 0m0.097s 00:06:15.752 ************************************ 00:06:15.752 END TEST version 00:06:15.752 ************************************ 00:06:15.752 09:18:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.752 09:18:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:16.011 09:18:41 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:16.011 09:18:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:16.011 09:18:41 -- spdk/autotest.sh@194 -- # uname -s 00:06:16.011 09:18:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:16.011 09:18:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.011 09:18:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.011 09:18:41 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:16.011 09:18:41 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:16.011 09:18:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.011 09:18:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.011 09:18:41 -- common/autotest_common.sh@10 -- # set +x 00:06:16.011 ************************************ 00:06:16.011 START TEST blockdev_nvme 00:06:16.011 ************************************ 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:16.011 * Looking for test storage... 00:06:16.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.011 09:18:41 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.011 --rc genhtml_branch_coverage=1 00:06:16.011 --rc genhtml_function_coverage=1 00:06:16.011 --rc genhtml_legend=1 00:06:16.011 --rc geninfo_all_blocks=1 00:06:16.011 --rc geninfo_unexecuted_blocks=1 00:06:16.011 00:06:16.011 ' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.011 --rc genhtml_branch_coverage=1 00:06:16.011 --rc genhtml_function_coverage=1 00:06:16.011 --rc genhtml_legend=1 00:06:16.011 --rc geninfo_all_blocks=1 00:06:16.011 --rc geninfo_unexecuted_blocks=1 00:06:16.011 00:06:16.011 ' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.011 --rc genhtml_branch_coverage=1 00:06:16.011 --rc genhtml_function_coverage=1 00:06:16.011 --rc genhtml_legend=1 00:06:16.011 --rc geninfo_all_blocks=1 00:06:16.011 --rc geninfo_unexecuted_blocks=1 00:06:16.011 00:06:16.011 ' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.011 --rc genhtml_branch_coverage=1 00:06:16.011 --rc genhtml_function_coverage=1 00:06:16.011 --rc genhtml_legend=1 00:06:16.011 --rc geninfo_all_blocks=1 00:06:16.011 --rc geninfo_unexecuted_blocks=1 00:06:16.011 00:06:16.011 ' 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:16.011 09:18:41 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59842 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59842 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59842 ']' 00:06:16.011 09:18:41 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.011 09:18:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.269 [2024-11-20 09:18:41.480899] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:16.269 [2024-11-20 09:18:41.481444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59842 ] 00:06:16.269 [2024-11-20 09:18:41.644779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.527 [2024-11-20 09:18:41.745756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.093 09:18:42 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.093 09:18:42 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.093 09:18:42 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:17.093 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.093 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.352 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:17.353 09:18:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:17.353 09:18:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:17.354 09:18:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d5ff17f4-0069-4b12-9933-ae7cced758d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d5ff17f4-0069-4b12-9933-ae7cced758d2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f71c28df-df8a-4185-a826-70f22d95a8f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f71c28df-df8a-4185-a826-70f22d95a8f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2edff3a2-73c7-4529-a9a1-a2621fe9824b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2edff3a2-73c7-4529-a9a1-a2621fe9824b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4c16fa05-4658-41da-9385-bef6cdd764df"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c16fa05-4658-41da-9385-bef6cdd764df",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0e56ec0d-0f82-4bad-afe1-0c1decf7b884"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0e56ec0d-0f82-4bad-afe1-0c1decf7b884",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c7573994-3fc2-48fa-920d-d26594021202"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c7573994-3fc2-48fa-920d-d26594021202",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:17.612 09:18:42 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:17.612 09:18:42 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:17.612 09:18:42 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:17.612 09:18:42 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59842 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59842 ']' 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59842 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:17.612 09:18:42 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59842 00:06:17.612 killing process with pid 59842 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59842' 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59842 00:06:17.612 09:18:42 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59842 00:06:18.985 09:18:44 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:18.985 09:18:44 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:18.985 09:18:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:18.985 09:18:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.985 09:18:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:18.985 ************************************ 00:06:18.985 START TEST bdev_hello_world 00:06:18.985 ************************************ 00:06:18.985 09:18:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:19.243 [2024-11-20 09:18:44.440517] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:19.243 [2024-11-20 09:18:44.440778] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59926 ] 00:06:19.243 [2024-11-20 09:18:44.600630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.500 [2024-11-20 09:18:44.704383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.077 [2024-11-20 09:18:45.240890] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:20.077 [2024-11-20 09:18:45.240947] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:20.077 [2024-11-20 09:18:45.240966] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:20.077 [2024-11-20 09:18:45.243428] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:20.077 [2024-11-20 09:18:45.244575] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:20.077 [2024-11-20 09:18:45.244605] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:20.077 [2024-11-20 09:18:45.245233] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
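The teardown traced above for pid 59842 (kill -0, ps, kill, wait) is the harness's standard killprocess pattern. A minimal sketch of that pattern, assuming the pid belongs to an SPDK app started by the same shell (the real helper in test/common/autotest_common.sh carries additional platform guards):
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0      # process already gone; nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")     # SPDK reactors report as reactor_0
  [ "$name" = sudo ] && return 1              # never signal a sudo wrapper directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                 # reap so sockets and hugepages are released
}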
00:06:20.077 00:06:20.077 [2024-11-20 09:18:45.245261] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:20.643 00:06:20.643 real 0m1.577s 00:06:20.643 user 0m1.295s 00:06:20.643 sys 0m0.174s 00:06:20.643 09:18:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.643 ************************************ 00:06:20.643 END TEST bdev_hello_world 00:06:20.643 ************************************ 00:06:20.643 09:18:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 09:18:46 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:20.643 09:18:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.643 09:18:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.643 09:18:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 ************************************ 00:06:20.643 START TEST bdev_bounds 00:06:20.643 ************************************ 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:20.643 Process bdevio pid: 59962 00:06:20.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59962 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59962' 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59962 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59962 ']' 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.643 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:20.643 [2024-11-20 09:18:46.077917] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
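The waitforlisten 59962 call above blocks until bdevio answers on /var/tmp/spdk.sock before any tests are requested. A rough sketch of the polling idea, assuming rpc_get_methods as the liveness probe (the real waitforlisten also verifies the pid is still alive between attempts):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1   # assumed back-off interval
done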
00:06:20.643 [2024-11-20 09:18:46.078037] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:06:20.901 [2024-11-20 09:18:46.238434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.901 [2024-11-20 09:18:46.341399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.901 [2024-11-20 09:18:46.341766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.901 [2024-11-20 09:18:46.341768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.835 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.835 09:18:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:21.835 09:18:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:21.835 I/O targets: 00:06:21.835 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:21.835 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:21.835 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:21.835 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:21.835 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:21.835 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:21.835 00:06:21.835 00:06:21.835 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.835 http://cunit.sourceforge.net/ 00:06:21.835 00:06:21.835 00:06:21.835 Suite: bdevio tests on: Nvme3n1 00:06:21.835 Test: blockdev write read block ...passed 00:06:21.835 Test: blockdev write zeroes read block ...passed 00:06:21.835 Test: blockdev write zeroes read no split ...passed 00:06:21.835 Test: blockdev write zeroes read split ...passed 00:06:21.835 Test: blockdev write zeroes read split partial ...passed 00:06:21.835 Test: blockdev reset ...[2024-11-20 09:18:47.062480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:21.835 passed 00:06:21.835 Test: blockdev write read 8 blocks ...[2024-11-20 09:18:47.067208] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
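Both halves of this suite's startup appear in the trace: bdevio parked with -w on the default RPC socket, then tests.py issuing perform_tests. A condensed sketch of that handshake, with paths matching this workspace (-w is what keeps the app idle until the RPC arrives):
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
bdevio_pid=$!
# once /var/tmp/spdk.sock answers (see the waitforlisten sketch above), run the CUnit suites:
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
# the harness then tears bdevio down with killprocess $bdevio_pid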
00:06:21.835 passed 00:06:21.835 Test: blockdev write read size > 128k ...passed 00:06:21.835 Test: blockdev write read invalid size ...passed 00:06:21.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:21.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:21.835 Test: blockdev write read max offset ...passed 00:06:21.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:21.835 Test: blockdev writev readv 8 blocks ...passed 00:06:21.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:21.835 Test: blockdev writev readv block ...passed 00:06:21.835 Test: blockdev writev readv size > 128k ...passed 00:06:21.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:21.835 Test: blockdev comparev and writev ...[2024-11-20 09:18:47.085395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a0a000 len:0x1000 00:06:21.835 [2024-11-20 09:18:47.085529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev nvme passthru rw ...passed 00:06:21.835 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:18:47.087714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:21.835 [2024-11-20 09:18:47.087746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev nvme admin passthru ...passed 00:06:21.835 Test: blockdev copy ...passed 00:06:21.835 Suite: bdevio tests on: Nvme2n3 00:06:21.835 Test: blockdev write read block ...passed 00:06:21.835 Test: blockdev write zeroes read block ...passed 00:06:21.835 Test: blockdev write zeroes read no split ...passed 00:06:21.835 Test: blockdev write zeroes read split ...passed 00:06:21.835 Test: blockdev write zeroes read split partial ...passed 00:06:21.835 Test: blockdev reset ...[2024-11-20 09:18:47.144764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:21.835 [2024-11-20 09:18:47.149287] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:21.835 passed 00:06:21.835 Test: blockdev write read 8 blocks ...
00:06:21.835 passed 00:06:21.835 Test: blockdev write read size > 128k ...passed 00:06:21.835 Test: blockdev write read invalid size ...passed 00:06:21.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:21.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:21.835 Test: blockdev write read max offset ...passed 00:06:21.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:21.835 Test: blockdev writev readv 8 blocks ...passed 00:06:21.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:21.835 Test: blockdev writev readv block ...passed 00:06:21.835 Test: blockdev writev readv size > 128k ...passed 00:06:21.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:21.835 Test: blockdev comparev and writev ...[2024-11-20 09:18:47.168496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29bc06000 len:0x1000 00:06:21.835 [2024-11-20 09:18:47.168618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev nvme passthru rw ...passed 00:06:21.835 Test: blockdev nvme passthru vendor specific ...passed 00:06:21.835 Test: blockdev nvme admin passthru ...[2024-11-20 09:18:47.170580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:21.835 [2024-11-20 09:18:47.170613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev copy ...passed 00:06:21.835 Suite: bdevio tests on: Nvme2n2 00:06:21.835 Test: blockdev write read block ...passed 00:06:21.835 Test: blockdev write zeroes read block ...passed 00:06:21.835 Test: blockdev write zeroes read no split ...passed 00:06:21.835 Test: blockdev write zeroes read split ...passed 00:06:21.835 Test: blockdev write zeroes read split partial ...passed 00:06:21.835 Test: blockdev reset ...[2024-11-20 09:18:47.227643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:21.835 [2024-11-20 09:18:47.232173] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:21.835 passed 00:06:21.835 Test: blockdev write read 8 blocks ...
00:06:21.835 passed 00:06:21.835 Test: blockdev write read size > 128k ...passed 00:06:21.835 Test: blockdev write read invalid size ...passed 00:06:21.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:21.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:21.835 Test: blockdev write read max offset ...passed 00:06:21.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:21.835 Test: blockdev writev readv 8 blocks ...passed 00:06:21.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:21.835 Test: blockdev writev readv block ...passed 00:06:21.835 Test: blockdev writev readv size > 128k ...passed 00:06:21.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:21.835 Test: blockdev comparev and writev ...[2024-11-20 09:18:47.251892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d423c000 len:0x1000 00:06:21.835 [2024-11-20 09:18:47.252027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev nvme passthru rw ...passed 00:06:21.835 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:18:47.254276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:21.835 [2024-11-20 09:18:47.254330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:21.835 passed 00:06:21.835 Test: blockdev nvme admin passthru ...passed 00:06:21.835 Test: blockdev copy ...passed 00:06:21.835 Suite: bdevio tests on: Nvme2n1 00:06:21.835 Test: blockdev write read block ...passed 00:06:21.835 Test: blockdev write zeroes read block ...passed 00:06:21.835 Test: blockdev write zeroes read no split ...passed 00:06:22.093 Test: blockdev write zeroes read split ...passed 00:06:22.093 Test: blockdev write zeroes read split partial ...passed 00:06:22.093 Test: blockdev reset ...[2024-11-20 09:18:47.311201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:22.093 [2024-11-20 09:18:47.315366] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:22.093 passed 00:06:22.093 Test: blockdev write read 8 blocks ...
00:06:22.093 passed 00:06:22.093 Test: blockdev write read size > 128k ...passed 00:06:22.093 Test: blockdev write read invalid size ...passed 00:06:22.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.093 Test: blockdev write read max offset ...passed 00:06:22.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.093 Test: blockdev writev readv 8 blocks ...passed 00:06:22.093 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.093 Test: blockdev writev readv block ...passed 00:06:22.093 Test: blockdev writev readv size > 128k ...passed 00:06:22.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.093 Test: blockdev comparev and writev ...[2024-11-20 09:18:47.334444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4238000 len:0x1000 00:06:22.093 [2024-11-20 09:18:47.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.093 passed 00:06:22.093 Test: blockdev nvme passthru rw ...passed 00:06:22.093 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:18:47.336643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.093 [2024-11-20 09:18:47.336672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.093 passed 00:06:22.093 Test: blockdev nvme admin passthru ...passed 00:06:22.093 Test: blockdev copy ...passed 00:06:22.093 Suite: bdevio tests on: Nvme1n1 00:06:22.093 Test: blockdev write read block ...passed 00:06:22.093 Test: blockdev write zeroes read block ...passed 00:06:22.093 Test: blockdev write zeroes read no split ...passed 00:06:22.093 Test: blockdev write zeroes read split ...passed 00:06:22.093 Test: blockdev write zeroes read split partial ...passed 00:06:22.093 Test: blockdev reset ...[2024-11-20 09:18:47.392214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:22.093 [2024-11-20 09:18:47.395069] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:22.093 passed 00:06:22.093 Test: blockdev write read 8 blocks ...passed 00:06:22.093 Test: blockdev write read size > 128k ...passed 00:06:22.093 Test: blockdev write read invalid size ...passed 00:06:22.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.093 Test: blockdev write read max offset ...passed 00:06:22.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.093 Test: blockdev writev readv 8 blocks ...passed 00:06:22.093 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.093 Test: blockdev writev readv block ...passed 00:06:22.093 Test: blockdev writev readv size > 128k ...passed 00:06:22.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.093 Test: blockdev comparev and writev ...[2024-11-20 09:18:47.412183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4234000 len:0x1000 00:06:22.093 [2024-11-20 09:18:47.412227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.093 passed 00:06:22.093 Test: blockdev nvme passthru rw ...passed 00:06:22.093 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:18:47.414623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.093 [2024-11-20 09:18:47.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.093 passed 00:06:22.093 Test: blockdev nvme admin passthru ...passed 00:06:22.093 Test: blockdev copy ...passed 00:06:22.094 Suite: bdevio tests on: Nvme0n1 00:06:22.094 Test: blockdev write read block ...passed 00:06:22.094 Test: blockdev write zeroes read block ...passed 00:06:22.094 Test: blockdev write zeroes read no split ...passed 00:06:22.094 Test: blockdev write zeroes read split ...passed 00:06:22.094 Test: blockdev write zeroes read split partial ...passed 00:06:22.094 Test: blockdev reset ...[2024-11-20 09:18:47.476737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:22.094 [2024-11-20 09:18:47.482905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:22.094 passed 00:06:22.094 Test: blockdev write read 8 blocks ...passed 00:06:22.094 Test: blockdev write read size > 128k ...passed 00:06:22.094 Test: blockdev write read invalid size ...passed 00:06:22.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.094 Test: blockdev write read max offset ...passed 00:06:22.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.094 Test: blockdev writev readv 8 blocks ...passed 00:06:22.094 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.094 Test: blockdev writev readv block ...passed 00:06:22.094 Test: blockdev writev readv size > 128k ...passed 00:06:22.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.094 Test: blockdev comparev and writev ...passed 00:06:22.094 Test: blockdev nvme passthru rw ...[2024-11-20 09:18:47.498927] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:22.094 separate metadata which is not supported yet. 00:06:22.094 passed 00:06:22.094 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:18:47.500686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:22.094 [2024-11-20 09:18:47.500725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:22.094 passed 00:06:22.094 Test: blockdev nvme admin passthru ...passed 00:06:22.094 Test: blockdev copy ...passed 00:06:22.094 00:06:22.094 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.094 suites 6 6 n/a 0 0 00:06:22.094 tests 138 138 138 0 0 00:06:22.094 asserts 893 893 893 0 n/a 00:06:22.094 00:06:22.094 Elapsed time = 1.236 seconds 00:06:22.094 0 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59962 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59962 ']' 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59962 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.094 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59962 00:06:22.351 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.351 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.351 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59962' 00:06:22.352 killing process with pid 59962 00:06:22.352 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59962 00:06:22.352 09:18:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59962 00:06:22.916 09:18:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:22.916 00:06:22.916 real 0m2.208s 00:06:22.916 user 0m5.545s 00:06:22.916 sys 0m0.285s 00:06:22.916 09:18:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.916 09:18:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:22.916 ************************************ 00:06:22.916 END 
TEST bdev_bounds 00:06:22.916 ************************************ 00:06:22.916 09:18:48 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:22.916 09:18:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:22.916 09:18:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.916 09:18:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.916 ************************************ 00:06:22.916 START TEST bdev_nbd 00:06:22.916 ************************************ 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:22.916 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60016 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60016 /var/tmp/spdk-nbd.sock 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60016 ']' 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
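The nbd_function_test that follows is driven entirely over the dedicated /var/tmp/spdk-nbd.sock RPC socket. The per-device round-trip it exercises, condensed from the RPC calls traced below (all three methods appear verbatim in this log):
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as a kernel block device
rpc nbd_get_disks                      # JSON list of {nbd_device, bdev_name} pairs
rpc nbd_stop_disk /dev/nbd0            # detach and free the nbd slot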
00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:22.917 09:18:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:22.917 [2024-11-20 09:18:48.349887] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:22.917 [2024-11-20 09:18:48.350007] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.174 [2024-11-20 09:18:48.519350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.431 [2024-11-20 09:18:48.628549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.997 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 
00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.254 1+0 records in 00:06:24.254 1+0 records out 00:06:24.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112977 s, 3.6 MB/s 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.254 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:24.255 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.512 1+0 records in 00:06:24.512 1+0 records out 00:06:24.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842232 s, 4.9 MB/s 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.512 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.513 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.513 1+0 records in 00:06:24.513 1+0 records out 00:06:24.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117391 s, 3.5 MB/s 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.770 09:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.770 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.771 1+0 records in 00:06:24.771 1+0 records out 00:06:24.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114361 s, 3.6 MB/s 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:24.771 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.029 1+0 records in 00:06:25.029 1+0 records out 00:06:25.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561284 s, 7.3 MB/s 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:06:25.029 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.288 1+0 records in 00:06:25.288 1+0 records out 00:06:25.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755112 s, 5.4 MB/s 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:25.288 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.546 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:25.546 { 00:06:25.546 "nbd_device": "/dev/nbd0", 00:06:25.546 "bdev_name": "Nvme0n1" 00:06:25.546 }, 00:06:25.546 { 00:06:25.546 "nbd_device": "/dev/nbd1", 00:06:25.546 "bdev_name": "Nvme1n1" 00:06:25.546 }, 00:06:25.546 { 00:06:25.546 "nbd_device": "/dev/nbd2", 00:06:25.547 "bdev_name": "Nvme2n1" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd3", 00:06:25.547 "bdev_name": "Nvme2n2" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd4", 00:06:25.547 "bdev_name": "Nvme2n3" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd5", 00:06:25.547 "bdev_name": "Nvme3n1" 00:06:25.547 } 00:06:25.547 ]' 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd0", 00:06:25.547 "bdev_name": "Nvme0n1" 00:06:25.547 }, 00:06:25.547 { 
00:06:25.547 "nbd_device": "/dev/nbd1", 00:06:25.547 "bdev_name": "Nvme1n1" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd2", 00:06:25.547 "bdev_name": "Nvme2n1" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd3", 00:06:25.547 "bdev_name": "Nvme2n2" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd4", 00:06:25.547 "bdev_name": "Nvme2n3" 00:06:25.547 }, 00:06:25.547 { 00:06:25.547 "nbd_device": "/dev/nbd5", 00:06:25.547 "bdev_name": "Nvme3n1" 00:06:25.547 } 00:06:25.547 ]' 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.547 09:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.804 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.805 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.062 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.063 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:26.321 
09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.321 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.578 09:18:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.836 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.095 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
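The loop entered above issues one nbd_start_disk RPC per bdev/device pair and then polls until the kernel has registered the device; the per-device calls follow below. As a self-contained sketch of that pattern (bdev names, device paths, RPC socket, and the 20-try bound are taken verbatim from this trace; the polling interval is an assumption, since the trace only shows the retry bound):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for i in "${!bdevs[@]}"; do
    # Attach the bdev to its NBD device over the SPDK RPC socket.
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
    # waitfornbd: poll until the kernel lists the device in /proc/partitions.
    for ((try = 1; try <= 20; try++)); do
      grep -q -w "$(basename "${nbds[i]}")" /proc/partitions && break
      sleep 0.1  # interval assumed; not shown in the trace
    done
  done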
00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:27.353 /dev/nbd0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.353 1+0 records in 00:06:27.353 1+0 records out 00:06:27.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011082 s, 3.7 MB/s 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:27.353 09:18:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:27.612 /dev/nbd1 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:06:27.612 1+0 records in 00:06:27.612 1+0 records out 00:06:27.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941872 s, 4.3 MB/s 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:27.612 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:27.870 /dev/nbd10 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.870 1+0 records in 00:06:27.870 1+0 records out 00:06:27.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114722 s, 3.6 MB/s 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:27.870 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:28.141 /dev/nbd11 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.141 1+0 records in 00:06:28.141 1+0 records out 00:06:28.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112309 s, 3.6 MB/s 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.141 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:28.427 /dev/nbd12 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.427 1+0 records in 00:06:28.427 1+0 records out 00:06:28.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113471 s, 3.6 MB/s 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.427 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.428 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.428 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:28.686 /dev/nbd13 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.686 1+0 records in 00:06:28.686 1+0 records out 00:06:28.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973264 s, 4.2 MB/s 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.686 09:18:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.944 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.944 { 00:06:28.944 "nbd_device": "/dev/nbd0", 00:06:28.944 "bdev_name": "Nvme0n1" 00:06:28.944 }, 00:06:28.944 { 00:06:28.944 "nbd_device": "/dev/nbd1", 00:06:28.944 "bdev_name": "Nvme1n1" 00:06:28.944 }, 00:06:28.944 { 00:06:28.944 "nbd_device": "/dev/nbd10", 00:06:28.944 "bdev_name": "Nvme2n1" 00:06:28.944 }, 00:06:28.944 { 00:06:28.944 "nbd_device": "/dev/nbd11", 00:06:28.944 
"bdev_name": "Nvme2n2" 00:06:28.944 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd12", 00:06:28.945 "bdev_name": "Nvme2n3" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd13", 00:06:28.945 "bdev_name": "Nvme3n1" 00:06:28.945 } 00:06:28.945 ]' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd0", 00:06:28.945 "bdev_name": "Nvme0n1" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd1", 00:06:28.945 "bdev_name": "Nvme1n1" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd10", 00:06:28.945 "bdev_name": "Nvme2n1" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd11", 00:06:28.945 "bdev_name": "Nvme2n2" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd12", 00:06:28.945 "bdev_name": "Nvme2n3" 00:06:28.945 }, 00:06:28.945 { 00:06:28.945 "nbd_device": "/dev/nbd13", 00:06:28.945 "bdev_name": "Nvme3n1" 00:06:28.945 } 00:06:28.945 ]' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.945 /dev/nbd1 00:06:28.945 /dev/nbd10 00:06:28.945 /dev/nbd11 00:06:28.945 /dev/nbd12 00:06:28.945 /dev/nbd13' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.945 /dev/nbd1 00:06:28.945 /dev/nbd10 00:06:28.945 /dev/nbd11 00:06:28.945 /dev/nbd12 00:06:28.945 /dev/nbd13' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:28.945 256+0 records in 00:06:28.945 256+0 records out 00:06:28.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725932 s, 144 MB/s 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.945 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.203 256+0 records in 00:06:29.203 256+0 records out 00:06:29.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240026 s, 4.4 MB/s 00:06:29.203 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:29.203 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.462 256+0 records in 00:06:29.462 256+0 records out 00:06:29.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.261035 s, 4.0 MB/s 00:06:29.462 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.462 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:29.720 256+0 records in 00:06:29.720 256+0 records out 00:06:29.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2081 s, 5.0 MB/s 00:06:29.720 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.720 09:18:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:29.978 256+0 records in 00:06:29.978 256+0 records out 00:06:29.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256044 s, 4.1 MB/s 00:06:29.978 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.978 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:30.236 256+0 records in 00:06:30.236 256+0 records out 00:06:30.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257818 s, 4.1 MB/s 00:06:30.236 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.236 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:30.494 256+0 records in 00:06:30.494 256+0 records out 00:06:30.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254016 s, 4.1 MB/s 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.494 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.752 09:18:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.752 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.010 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.268 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.525 09:18:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:31.782 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:31.782 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:31.782 09:18:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:31.782 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.783 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:32.040 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:32.297 malloc_lvol_verify 00:06:32.297 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:32.555 78f3b549-07e9-4eb4-8e6d-1345971b16c2 00:06:32.555 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:32.555 0fab6da4-373f-426a-b442-e175062901fc 00:06:32.556 09:18:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:32.813 /dev/nbd0 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:32.813 09:18:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:32.813 mke2fs 1.47.0 (5-Feb-2023) 00:06:32.813 Discarding device blocks: 0/4096 done 00:06:32.813 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:32.813 00:06:32.813 Allocating group tables: 0/1 done 00:06:32.813 Writing inode tables: 0/1 done 00:06:32.813 Creating journal (1024 blocks): done 00:06:32.813 Writing superblocks and filesystem accounting information: 0/1 done 00:06:32.813 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.813 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60016 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60016 ']' 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60016 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60016 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.072 killing process with pid 60016 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60016' 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60016 00:06:33.072 09:18:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60016 00:06:34.007 09:18:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:34.007 00:06:34.007 real 0m10.926s 00:06:34.007 user 0m15.040s 00:06:34.007 sys 0m3.331s 00:06:34.007 09:18:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 
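With all six devices up, the trace above pushed a 1 MiB random pattern through each device and compared it back (nbd_dd_data_verify), then layered a logical volume over NBD and formatted it (nbd_with_lvol_verify). Both phases condensed into a sketch, with every command, path, and size as it appears in the trace; this is an illustration, not the helpers' verbatim bodies:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  src=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  # Phase 1: write-and-verify. Fill a scratch file with 1 MiB of random data,
  # write it through every NBD device with O_DIRECT, then compare byte-for-byte.
  dd if=/dev/urandom of="$src" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$src" "$nbd"
  done
  rm "$src"
  # Phase 2: lvol-over-NBD (the trace stops the six data devices before
  # reusing /dev/nbd0). Stack malloc bdev -> lvolstore -> lvol, export the
  # lvol as /dev/nbd0, and check that mkfs succeeds on it.
  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0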
00:06:34.007 ************************************ 00:06:34.007 END TEST bdev_nbd 00:06:34.007 ************************************ 00:06:34.007 09:18:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:34.007 09:18:59 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:34.008 09:18:59 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:34.008 skipping fio tests on NVMe due to multi-ns failures. 00:06:34.008 09:18:59 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:34.008 09:18:59 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:34.008 09:18:59 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:34.008 09:18:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:34.008 09:18:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.008 09:18:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 ************************************ 00:06:34.008 START TEST bdev_verify 00:06:34.008 ************************************ 00:06:34.008 09:18:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:34.008 [2024-11-20 09:18:59.339444] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:34.008 [2024-11-20 09:18:59.339563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:06:34.265 [2024-11-20 09:18:59.496421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.265 [2024-11-20 09:18:59.597477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.265 [2024-11-20 09:18:59.597603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.830 Running I/O for 5 seconds... 
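While the verify job runs (its results follow below), the bdevperf invocation above decodes as follows. The flags are verbatim from the traced command line; the glosses reflect bdevperf's standard option semantics, and -C is left unglossed:

  # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  #   --json .../test/bdev/bdev.json \  # bdev configuration to load
  #   -q 128 \     # queue depth per job
  #   -o 4096 \    # I/O size in bytes (4 KiB)
  #   -w verify \  # write a pattern, read it back, and check it
  #   -t 5 \       # run for 5 seconds
  #   -m 0x3 \     # core mask: two reactors, matching the two
  #                # "Reactor started on core 0/1" lines above
  #   -C           # flag present in the run; gloss omitted here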
00:06:37.137 19328.00 IOPS, 75.50 MiB/s [2024-11-20T09:19:03.526Z] 19136.00 IOPS, 74.75 MiB/s [2024-11-20T09:19:04.459Z] 18837.33 IOPS, 73.58 MiB/s [2024-11-20T09:19:05.392Z] 18736.00 IOPS, 73.19 MiB/s [2024-11-20T09:19:05.392Z] 18713.60 IOPS, 73.10 MiB/s
00:06:39.936 Latency(us)
00:06:39.936 [2024-11-20T09:19:05.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:39.936 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.936 Verification LBA range: start 0x0 length 0xbd0bd
00:06:39.936 Nvme0n1 : 5.06 1491.52 5.83 0.00 0.00 85428.39 14216.27 94371.84
00:06:39.936 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.936 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:39.936 Nvme0n1 : 5.07 1564.37 6.11 0.00 0.00 81527.14 16938.54 104857.60
00:06:39.936 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.936 Verification LBA range: start 0x0 length 0xa0000
00:06:39.936 Nvme1n1 : 5.06 1491.09 5.82 0.00 0.00 85331.74 17341.83 84289.38
00:06:39.937 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0xa0000 length 0xa0000
00:06:39.937 Nvme1n1 : 5.07 1563.91 6.11 0.00 0.00 81267.89 16434.41 86305.87
00:06:39.937 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x0 length 0x80000
00:06:39.937 Nvme2n1 : 5.09 1495.00 5.84 0.00 0.00 84598.37 7713.08 78643.20
00:06:39.937 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x80000 length 0x80000
00:06:39.937 Nvme2n1 : 5.08 1563.46 6.11 0.00 0.00 80971.54 15325.34 71787.13
00:06:39.937 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x0 length 0x80000
00:06:39.937 Nvme2n2 : 5.11 1503.22 5.87 0.00 0.00 84142.91 10939.47 80256.39
00:06:39.937 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x80000 length 0x80000
00:06:39.937 Nvme2n2 : 5.08 1563.04 6.11 0.00 0.00 80865.92 15022.87 72997.02
00:06:39.937 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x0 length 0x80000
00:06:39.937 Nvme2n3 : 5.11 1502.79 5.87 0.00 0.00 84008.40 11342.77 82676.18
00:06:39.937 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x80000 length 0x80000
00:06:39.937 Nvme2n3 : 5.09 1570.94 6.14 0.00 0.00 80325.91 6704.84 70173.93
00:06:39.937 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x0 length 0x20000
00:06:39.937 Nvme3n1 : 5.11 1501.73 5.87 0.00 0.00 83864.99 11998.13 83079.48
00:06:39.937 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:39.937 Verification LBA range: start 0x20000 length 0x20000
00:06:39.937 Nvme3n1 : 5.11 1578.01 6.16 0.00 0.00 79882.44 10082.46 72190.42
00:06:39.937 [2024-11-20T09:19:05.393Z] ===================================================================================================================
00:06:39.937 [2024-11-20T09:19:05.393Z] Total : 18389.09 71.83 0.00 0.00 82639.68 6704.84 104857.60
00:06:42.463
00:06:42.463 real 0m8.199s
00:06:42.463 user 0m15.416s
00:06:42.463 sys 0m0.240s
00:06:42.463 09:19:07 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.463 ************************************ 00:06:42.463 END TEST bdev_verify 00:06:42.463 ************************************ 00:06:42.464 09:19:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.464 09:19:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.464 09:19:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:42.464 09:19:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.464 09:19:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.464 ************************************ 00:06:42.464 START TEST bdev_verify_big_io 00:06:42.464 ************************************ 00:06:42.464 09:19:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.464 [2024-11-20 09:19:07.614749] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:42.464 [2024-11-20 09:19:07.614867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:06:42.464 [2024-11-20 09:19:07.774624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.464 [2024-11-20 09:19:07.880084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.464 [2024-11-20 09:19:07.880242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.396 Running I/O for 5 seconds... 
00:06:49.262 1237.00 IOPS, 77.31 MiB/s [2024-11-20T09:19:14.718Z] 2320.00 IOPS, 145.00 MiB/s [2024-11-20T09:19:14.718Z] 2696.33 IOPS, 168.52 MiB/s
00:06:49.262 Latency(us)
00:06:49.262 [2024-11-20T09:19:14.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:49.262 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0xbd0b
00:06:49.262 Nvme0n1 : 5.78 105.22 6.58 0.00 0.00 1170891.56 11846.89 1122782.92
00:06:49.262 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:49.262 Nvme0n1 : 5.76 115.93 7.25 0.00 0.00 1063274.43 25609.45 967916.31
00:06:49.262 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0xa000
00:06:49.262 Nvme1n1 : 5.78 102.58 6.41 0.00 0.00 1151311.01 105664.20 967916.31
00:06:49.262 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0xa000 length 0xa000
00:06:49.262 Nvme1n1 : 5.76 114.67 7.17 0.00 0.00 1040088.43 116149.96 961463.53
00:06:49.262 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0x8000
00:06:49.262 Nvme2n1 : 5.87 106.53 6.66 0.00 0.00 1077998.07 56461.78 1361535.61
00:06:49.262 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x8000 length 0x8000
00:06:49.262 Nvme2n1 : 5.78 121.71 7.61 0.00 0.00 972134.11 16636.06 987274.63
00:06:49.262 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0x8000
00:06:49.262 Nvme2n2 : 5.87 105.47 6.59 0.00 0.00 1057873.69 26819.35 2090699.22
00:06:49.262 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x8000 length 0x8000
00:06:49.262 Nvme2n2 : 5.79 121.67 7.60 0.00 0.00 944432.91 17341.83 1013085.74
00:06:49.262 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0x8000
00:06:49.262 Nvme2n3 : 5.92 117.21 7.33 0.00 0.00 920098.82 12351.02 2116510.33
00:06:49.262 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x8000 length 0x8000
00:06:49.262 Nvme2n3 : 5.83 127.44 7.96 0.00 0.00 878914.23 33473.77 1032444.06
00:06:49.262 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x0 length 0x2000
00:06:49.262 Nvme3n1 : 5.97 152.31 9.52 0.00 0.00 694794.77 453.71 2168132.53
00:06:49.262 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.262 Verification LBA range: start 0x2000 length 0x2000
00:06:49.262 Nvme3n1 : 5.84 135.29 8.46 0.00 0.00 805854.86 5494.94 1038896.84
00:06:49.262 [2024-11-20T09:19:14.718Z] ===================================================================================================================
00:06:49.262 [2024-11-20T09:19:14.718Z] Total : 1426.03 89.13 0.00 0.00 965344.57 453.71 2168132.53
00:06:51.824
00:06:51.824 real 0m9.212s
00:06:51.824 user 0m17.424s
00:06:51.824 sys 0m0.252s
00:06:51.824 09:19:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.824
************************************ 00:06:51.824 END TEST bdev_verify_big_io 00:06:51.824 ************************************ 00:06:51.824 09:19:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:51.824 09:19:16 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.824 09:19:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.824 09:19:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.824 09:19:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.824 ************************************ 00:06:51.824 START TEST bdev_write_zeroes 00:06:51.824 ************************************ 00:06:51.824 09:19:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.824 [2024-11-20 09:19:16.882833] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:51.824 [2024-11-20 09:19:16.882957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:06:51.824 [2024-11-20 09:19:17.046499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.824 [2024-11-20 09:19:17.152769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.389 Running I/O for 1 seconds... 
00:06:53.321 21188.00 IOPS, 82.77 MiB/s
00:06:53.321 Latency(us)
00:06:53.321 [2024-11-20T09:19:18.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:53.321 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme0n1 : 1.02 2579.18 10.07 0.00 0.00 49507.64 4839.58 382326.94
00:06:53.321 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme1n1 : 1.02 3884.69 15.17 0.00 0.00 32838.27 8973.39 217781.17
00:06:53.321 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme2n1 : 1.02 3819.46 14.92 0.00 0.00 33295.60 8872.57 219394.36
00:06:53.321 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme2n2 : 1.02 3877.45 15.15 0.00 0.00 32652.75 7108.14 219394.36
00:06:53.321 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme2n3 : 1.02 3810.41 14.88 0.00 0.00 33176.17 8318.03 219394.36
00:06:53.321 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:53.321 Nvme3n1 : 1.03 3868.39 15.11 0.00 0.00 32624.03 6452.78 211328.39
00:06:53.321 [2024-11-20T09:19:18.777Z] ===================================================================================================================
00:06:53.321 [2024-11-20T09:19:18.777Z] Total : 21839.57 85.31 0.00 0.00 34868.65 4839.58 382326.94
00:06:54.261
00:06:54.261 real 0m2.715s
00:06:54.261 user 0m2.407s
00:06:54.261 sys 0m0.193s
00:06:54.261 09:19:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.261 09:19:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:54.261 ************************************
00:06:54.261 END TEST bdev_write_zeroes
00:06:54.261 ************************************
00:06:54.261 09:19:19 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:54.261 09:19:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:54.261 09:19:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.261 09:19:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:54.261 ************************************
00:06:54.261 START TEST bdev_json_nonenclosed
00:06:54.261 ************************************
00:06:54.261 09:19:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:55.033 [2024-11-20 09:19:19.667077] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:54.261 [2024-11-20 09:19:19.667203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60667 ] 00:06:54.518 [2024-11-20 09:19:19.828534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.518 [2024-11-20 09:19:19.929622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.518 [2024-11-20 09:19:19.929706] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:54.518 [2024-11-20 09:19:19.929722] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:54.518 [2024-11-20 09:19:19.929731] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.776 00:06:54.776 real 0m0.512s 00:06:54.776 user 0m0.323s 00:06:54.776 sys 0m0.084s 00:06:54.776 ************************************ 00:06:54.776 END TEST bdev_json_nonenclosed 00:06:54.776 ************************************ 00:06:54.776 09:19:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.776 09:19:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:54.776 09:19:20 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.776 09:19:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:54.776 09:19:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.776 09:19:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.776 ************************************ 00:06:54.776 START TEST bdev_json_nonarray 00:06:54.776 ************************************ 00:06:54.776 09:19:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.033 [2024-11-20 09:19:20.239680] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:55.033 [2024-11-20 09:19:20.240238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:06:55.033 [2024-11-20 09:19:20.401535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.290 [2024-11-20 09:19:20.507425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.290 [2024-11-20 09:19:20.507509] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:55.290 [2024-11-20 09:19:20.507526] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:55.290 [2024-11-20 09:19:20.507536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:55.290 ************************************
00:06:55.290 END TEST bdev_json_nonarray
00:06:55.290 ************************************
00:06:55.290 
00:06:55.290 real 0m0.516s
00:06:55.290 user 0m0.317s
00:06:55.290 sys 0m0.091s
00:06:55.290 09:19:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.290 09:19:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:55.549 09:19:20 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:55.549 
00:06:55.549 real 0m39.536s
00:06:55.549 user 1m1.014s
00:06:55.549 sys 0m5.383s
00:06:55.549 ************************************
00:06:55.549 END TEST blockdev_nvme
00:06:55.549 ************************************
00:06:55.549 09:19:20 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.549 09:19:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:55.549 09:19:20 -- spdk/autotest.sh@209 -- # uname -s
00:06:55.549 09:19:20 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:55.549 09:19:20 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:55.549 09:19:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:55.549 09:19:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.549 09:19:20 -- common/autotest_common.sh@10 -- # set +x
00:06:55.549 ************************************
00:06:55.549 START TEST blockdev_nvme_gpt
00:06:55.549 ************************************
00:06:55.549 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:55.549 * Looking for test storage...
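The START/END banners and the real/user/sys blocks that recur through this log come from the run_test wrapper in autotest_common.sh. A simplified sketch of its shape, not the actual implementation (which also records per-test results for reporting):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # the banners bracket the timed command
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }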
00:06:55.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:55.549 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.549 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.549 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.549 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:55.549 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.550 09:19:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc 
genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60771 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60771 
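waitforlisten, traced just above, blocks until the freshly started spdk_tgt answers RPCs on /var/tmp/spdk.sock (the trace shows max_retries=100). A hedged approximation of that loop; the helper name and polling step are assumptions, while spdk_get_version is a standard SPDK RPC:

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            if [[ -S $sock ]] && /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
                    -s "$sock" spdk_get_version >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }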
00:06:55.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60771 ']' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.550 09:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.550 09:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:55.808 [2024-11-20 09:19:21.056934] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:55.808 [2024-11-20 09:19:21.057056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60771 ] 00:06:55.808 [2024-11-20 09:19:21.209190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.065 [2024-11-20 09:19:21.310569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.630 09:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.630 09:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:56.630 09:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:56.630 09:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:56.631 09:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:56.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.146 Waiting for block devices as requested 00:06:57.146 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.146 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.146 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.404 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.707 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:02.707 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:02.707 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:02.708 09:19:27 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:02.708 BYT; 00:07:02.708 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:02.708 BYT; 00:07:02.708 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.708 09:19:27 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.708 09:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:03.648 The operation has completed successfully. 00:07:03.648 09:19:28 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:04.604 The operation has completed successfully. 00:07:04.604 09:19:29 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.432 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.432 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.691 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.691 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:05.691 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.691 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.691 [] 00:07:05.691 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:05.691 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:05.691 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.691 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.950 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.950 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:05.950 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:05.950 09:19:31 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.950 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.950 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:06.211 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:06.212 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a25d8807-ee06-4208-b766-ea065c62ef5e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a25d8807-ee06-4208-b766-ea065c62ef5e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e9ae4b5f-09d4-44af-a50b-b271ffa80983"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9ae4b5f-09d4-44af-a50b-b271ffa80983",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "80043573-afd7-47a6-a1f3-759ad8075dd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80043573-afd7-47a6-a1f3-759ad8075dd4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3d4a4f2c-93e5-4329-8e4d-5d389da8a27b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d4a4f2c-93e5-4329-8e4d-5d389da8a27b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f58550ab-4d52-4b6b-a34e-a76346ca0a74"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f58550ab-4d52-4b6b-a34e-a76346ca0a74",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.212 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.212 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:06.212 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:06.212 09:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60771 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60771 ']' 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60771 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60771 00:07:06.212 killing process with pid 60771 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60771' 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60771 00:07:06.212 09:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60771 00:07:08.126 09:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:08.126 09:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.126 09:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:08.126 09:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.126 09:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.126 ************************************ 00:07:08.126 START TEST bdev_hello_world 00:07:08.126 ************************************ 00:07:08.126 09:19:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.126 
[2024-11-20 09:19:33.241993] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:08.126 [2024-11-20 09:19:33.242116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61397 ] 00:07:08.126 [2024-11-20 09:19:33.400824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.126 [2024-11-20 09:19:33.516554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.691 [2024-11-20 09:19:34.065537] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:08.691 [2024-11-20 09:19:34.065586] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:08.691 [2024-11-20 09:19:34.065610] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:08.691 [2024-11-20 09:19:34.068113] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:08.691 [2024-11-20 09:19:34.068502] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:08.691 [2024-11-20 09:19:34.068525] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:08.691 [2024-11-20 09:19:34.068937] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:08.691 00:07:08.691 [2024-11-20 09:19:34.068956] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:09.638 00:07:09.638 real 0m1.758s 00:07:09.638 user 0m1.453s 00:07:09.638 sys 0m0.196s 00:07:09.638 ************************************ 00:07:09.638 END TEST bdev_hello_world 00:07:09.638 09:19:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.638 09:19:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 ************************************ 00:07:09.638 09:19:34 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:09.638 09:19:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.638 09:19:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.638 09:19:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 ************************************ 00:07:09.638 START TEST bdev_bounds 00:07:09.638 ************************************ 00:07:09.638 09:19:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:09.638 Process bdevio pid: 61439 00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61439 00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61439' 00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61439 00:07:09.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
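bdev_bounds launches bdevio as a server and then drives the actual suites over RPC from tests.py; in the trace, -w appears to make the app wait for that trigger after init, and -s 0 leaves memory sizing at its default. A condensed sketch of that control flow, with paths taken from the trace:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once the RPC socket is up, this runs every registered suite:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid"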
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61439 ']'
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:09.638 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:09.638 [2024-11-20 09:19:35.066094] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:07:09.895 [2024-11-20 09:19:35.066650] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ]
00:07:09.895 [2024-11-20 09:19:35.227264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:09.895 [2024-11-20 09:19:35.333705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:09.895 [2024-11-20 09:19:35.334137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.896 [2024-11-20 09:19:35.334137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:10.461 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.461 09:19:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:10.461 09:19:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:10.721 I/O targets:
00:07:10.721 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:10.721 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:07:10.721 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:07:10.721 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:10.721 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:10.721 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:10.721 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:10.721 
00:07:10.721 
00:07:10.721 CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.721 http://cunit.sourceforge.net/
00:07:10.721 
00:07:10.721 
00:07:10.721 Suite: bdevio tests on: Nvme3n1
00:07:10.721 Test: blockdev write read block ...passed
00:07:10.721 Test: blockdev write zeroes read block ...passed
00:07:10.721 Test: blockdev write zeroes read no split ...passed
00:07:10.721 Test: blockdev write zeroes read split ...passed
00:07:10.721 Test: blockdev write zeroes read split partial ...passed
00:07:10.721 Test: blockdev reset ...[2024-11-20 09:19:36.106819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:10.721 [2024-11-20 09:19:36.111108] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
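The COMPARE notices printed during the comparev tests that follow are the expected negative path: each suite issues a COMPARE that must miscompare. Per the NVMe spec, the (SCT/SC) pair in "COMPARE FAILURE (02/85)" decodes to Status Code Type 0x2 (Media and Data Integrity Errors) and Status Code 0x85 (Compare Failure). A toy decode of the printed pair:

    declare -A sct=( [00]='Generic Command Status' [01]='Command Specific Status' [02]='Media and Data Integrity Errors' )
    pair='02/85'
    echo "SCT 0x${pair%/*} = ${sct[${pair%/*}]}; SC 0x${pair#*/} = Compare Failure"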
00:07:10.721 passed
00:07:10.721 Test: blockdev write read 8 blocks ...passed
00:07:10.721 Test: blockdev write read size > 128k ...passed
00:07:10.721 Test: blockdev write read invalid size ...passed
00:07:10.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.721 Test: blockdev write read max offset ...passed
00:07:10.721 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.721 Test: blockdev writev readv 8 blocks ...passed
00:07:10.721 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.721 Test: blockdev writev readv block ...passed
00:07:10.721 Test: blockdev writev readv size > 128k ...passed
00:07:10.721 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.721 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.130442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a04000 len:0x1000
00:07:10.721 [2024-11-20 09:19:36.130488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.721 passed
00:07:10.721 Test: blockdev nvme passthru rw ...passed
00:07:10.721 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:36.132916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:10.721 [2024-11-20 09:19:36.132948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:10.721 passed
00:07:10.721 Test: blockdev nvme admin passthru ...passed
00:07:10.721 Test: blockdev copy ...passed
00:07:10.721 Suite: bdevio tests on: Nvme2n3
00:07:10.721 Test: blockdev write read block ...passed
00:07:10.981 Test: blockdev write zeroes read block ...passed
00:07:10.981 Test: blockdev write zeroes read no split ...passed
00:07:10.981 Test: blockdev write zeroes read split ...passed
00:07:10.981 Test: blockdev write zeroes read split partial ...passed
00:07:10.981 Test: blockdev reset ...[2024-11-20 09:19:36.256043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:10.981 [2024-11-20 09:19:36.258939] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:10.981 passed
00:07:10.981 Test: blockdev write read 8 blocks ...passed
00:07:10.981 Test: blockdev write read size > 128k ...passed
00:07:10.981 Test: blockdev write read invalid size ...passed
00:07:10.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.981 Test: blockdev write read max offset ...passed
00:07:10.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.981 Test: blockdev writev readv 8 blocks ...passed
00:07:10.981 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.981 Test: blockdev writev readv block ...passed
00:07:10.981 Test: blockdev writev readv size > 128k ...passed
00:07:10.981 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.981 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.270767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a02000 len:0x1000
00:07:10.981 [2024-11-20 09:19:36.270808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.981 passed
00:07:10.981 Test: blockdev nvme passthru rw ...passed
00:07:10.981 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:36.272339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:10.981 [2024-11-20 09:19:36.272368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:10.981 passed
00:07:10.981 Test: blockdev nvme admin passthru ...passed
00:07:10.981 Test: blockdev copy ...passed
00:07:10.981 Suite: bdevio tests on: Nvme2n2
00:07:10.981 Test: blockdev write read block ...passed
00:07:10.981 Test: blockdev write zeroes read block ...passed
00:07:10.981 Test: blockdev write zeroes read no split ...passed
00:07:10.981 Test: blockdev write zeroes read split ...passed
00:07:10.981 Test: blockdev write zeroes read split partial ...passed
00:07:10.981 Test: blockdev reset ...[2024-11-20 09:19:36.327154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:10.981 [2024-11-20 09:19:36.329965] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:10.981 passed
00:07:10.981 Test: blockdev write read 8 blocks ...passed
00:07:10.981 Test: blockdev write read size > 128k ...passed
00:07:10.981 Test: blockdev write read invalid size ...passed
00:07:10.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.981 Test: blockdev write read max offset ...passed
00:07:10.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.981 Test: blockdev writev readv 8 blocks ...passed
00:07:10.981 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.981 Test: blockdev writev readv block ...passed
00:07:10.981 Test: blockdev writev readv size > 128k ...passed
00:07:10.981 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.981 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.346928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbe38000 len:0x1000
00:07:10.981 [2024-11-20 09:19:36.346968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.981 passed
00:07:10.981 Test: blockdev nvme passthru rw ...passed
00:07:10.981 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:36.349221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:10.981 [2024-11-20 09:19:36.349251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:10.981 passed
00:07:10.981 Test: blockdev nvme admin passthru ...passed
00:07:10.981 Test: blockdev copy ...passed
00:07:10.981 Suite: bdevio tests on: Nvme2n1
00:07:10.981 Test: blockdev write read block ...passed
00:07:10.981 Test: blockdev write zeroes read block ...passed
00:07:10.981 Test: blockdev write zeroes read no split ...passed
00:07:10.981 Test: blockdev write zeroes read split ...passed
00:07:10.981 Test: blockdev write zeroes read split partial ...passed
00:07:10.981 Test: blockdev reset ...[2024-11-20 09:19:36.414712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:10.981 [2024-11-20 09:19:36.419193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:10.981 passed
00:07:10.981 Test: blockdev write read 8 blocks ...passed
00:07:10.981 Test: blockdev write read size > 128k ...passed
00:07:10.981 Test: blockdev write read invalid size ...passed
00:07:10.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.982 Test: blockdev write read max offset ...passed
00:07:10.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.982 Test: blockdev writev readv 8 blocks ...passed
00:07:10.982 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.982 Test: blockdev writev readv block ...passed
00:07:10.982 Test: blockdev writev readv size > 128k ...passed
00:07:11.242 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:11.242 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.436711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbe34000 len:0x1000
00:07:11.242 [2024-11-20 09:19:36.436753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:11.242 passed
00:07:11.242 Test: blockdev nvme passthru rw ...passed
00:07:11.242 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:36.439179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:11.242 [2024-11-20 09:19:36.439212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:11.242 passed
00:07:11.242 Test: blockdev nvme admin passthru ...passed
00:07:11.242 Test: blockdev copy ...passed
00:07:11.242 Suite: bdevio tests on: Nvme1n1p2
00:07:11.242 Test: blockdev write read block ...passed
00:07:11.242 Test: blockdev write zeroes read block ...passed
00:07:11.242 Test: blockdev write zeroes read no split ...passed
00:07:11.242 Test: blockdev write zeroes read split ...passed
00:07:11.242 Test: blockdev write zeroes read split partial ...passed
00:07:11.242 Test: blockdev reset ...[2024-11-20 09:19:36.507817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:11.242 [2024-11-20 09:19:36.513039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:11.242 passed
00:07:11.242 Test: blockdev write read 8 blocks ...passed
00:07:11.242 Test: blockdev write read size > 128k ...passed
00:07:11.242 Test: blockdev write read invalid size ...passed
00:07:11.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:11.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:11.242 Test: blockdev write read max offset ...passed
00:07:11.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:11.242 Test: blockdev writev readv 8 blocks ...passed
00:07:11.242 Test: blockdev writev readv 30 x 1block ...passed
00:07:11.242 Test: blockdev writev readv block ...passed
00:07:11.242 Test: blockdev writev readv size > 128k ...passed
00:07:11.242 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:11.242 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.531846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dbe30000 len:0x1000
00:07:11.242 [2024-11-20 09:19:36.531884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:11.242 passed
00:07:11.242 Test: blockdev nvme passthru rw ...passed
00:07:11.242 Test: blockdev nvme passthru vendor specific ...passed
00:07:11.242 Test: blockdev nvme admin passthru ...passed
00:07:11.242 Test: blockdev copy ...passed
00:07:11.242 Suite: bdevio tests on: Nvme1n1p1
00:07:11.242 Test: blockdev write read block ...passed
00:07:11.242 Test: blockdev write zeroes read block ...passed
00:07:11.242 Test: blockdev write zeroes read no split ...passed
00:07:11.242 Test: blockdev write zeroes read split ...passed
00:07:11.242 Test: blockdev write zeroes read split partial ...passed
00:07:11.242 Test: blockdev reset ...[2024-11-20 09:19:36.593872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:11.242 [2024-11-20 09:19:36.597899] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:11.242 passed 00:07:11.242 Test: blockdev write read 8 blocks ...passed 00:07:11.242 Test: blockdev write read size > 128k ...passed 00:07:11.242 Test: blockdev write read invalid size ...passed 00:07:11.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.242 Test: blockdev write read max offset ...passed 00:07:11.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.242 Test: blockdev writev readv 8 blocks ...passed 00:07:11.242 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.242 Test: blockdev writev readv block ...passed 00:07:11.242 Test: blockdev writev readv size > 128k ...passed 00:07:11.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.242 Test: blockdev comparev and writev ...[2024-11-20 09:19:36.614272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b6c0e000 len:0x1000 00:07:11.242 [2024-11-20 09:19:36.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.242 passed 00:07:11.242 Test: blockdev nvme passthru rw ...passed 00:07:11.242 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.242 Test: blockdev nvme admin passthru ...passed 00:07:11.242 Test: blockdev copy ...passed 00:07:11.242 Suite: bdevio tests on: Nvme0n1 00:07:11.242 Test: blockdev write read block ...passed 00:07:11.242 Test: blockdev write zeroes read block ...passed 00:07:11.242 Test: blockdev write zeroes read no split ...passed 00:07:11.242 Test: blockdev write zeroes read split ...passed 00:07:11.502 Test: blockdev write zeroes read split partial ...passed 00:07:11.502 Test: blockdev reset ...[2024-11-20 09:19:36.712038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:11.502 [2024-11-20 09:19:36.715212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:11.502 passed 00:07:11.502 Test: blockdev write read 8 blocks ...passed 00:07:11.502 Test: blockdev write read size > 128k ...passed 00:07:11.502 Test: blockdev write read invalid size ...passed 00:07:11.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.502 Test: blockdev write read max offset ...passed 00:07:11.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.502 Test: blockdev writev readv 8 blocks ...passed 00:07:11.502 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.502 Test: blockdev writev readv block ...passed 00:07:11.502 Test: blockdev writev readv size > 128k ...passed 00:07:11.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.502 Test: blockdev comparev and writev ...passed 00:07:11.502 Test: blockdev nvme passthru rw ...[2024-11-20 09:19:36.729617] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:11.502 separate metadata which is not supported yet. 
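Nvme0n1 is the one bdev where comparev_and_writev is skipped: per the ERROR line above, that namespace is formatted with separate per-block metadata, which bdevio cannot drive yet, so the case is still reported as passed after the skip. For reference only, outside this harness the metadata size of a namespace can be checked with nvme-cli while the device is bound to the kernel driver; in this run the controller is claimed by SPDK's userspace driver, so the sketch below is illustrative and could not run as-is here.

nvme id-ns /dev/nvme0n1 -H | grep -i 'metadata size'

A non-zero metadata size on the in-use LBA format appears to be what the check at bdevio.c:727 rejects.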
00:07:11.502 passed 00:07:11.502 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:36.731096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:11.502 [2024-11-20 09:19:36.731135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:11.502 passed 00:07:11.502 Test: blockdev nvme admin passthru ...passed 00:07:11.502 Test: blockdev copy ...passed 00:07:11.502 00:07:11.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.502 suites 7 7 n/a 0 0 00:07:11.502 tests 161 161 161 0 0 00:07:11.502 asserts 1025 1025 1025 0 n/a 00:07:11.502 00:07:11.502 Elapsed time = 1.777 seconds 00:07:11.502 0 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61439 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61439 ']' 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61439 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61439 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.502 killing process with pid 61439 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61439' 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61439 00:07:11.502 09:19:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61439 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:12.975 00:07:12.975 real 0m3.059s 00:07:12.975 user 0m7.838s 00:07:12.975 sys 0m0.339s 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.975 ************************************ 00:07:12.975 END TEST bdev_bounds 00:07:12.975 ************************************ 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.975 09:19:38 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.975 09:19:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:12.975 09:19:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.975 09:19:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.975 ************************************ 00:07:12.975 START TEST bdev_nbd 00:07:12.975 ************************************ 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61499 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61499 /var/tmp/spdk-nbd.sock 00:07:12.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61499 ']' 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.975 09:19:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.975 [2024-11-20 09:19:38.201878] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
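Here bdev_svc has been launched with -r /var/tmp/spdk-nbd.sock and the harness blocks in waitforlisten until the RPC server is reachable. Below is a minimal sketch of that wait-for-listen pattern in plain bash; the real helper in autotest_common.sh also tracks the pid and probes the socket over RPC, and the retry count and interval here are assumptions.

wait_for_sock() {
  local sock=$1 i
  for ((i = 0; i < 100; i++)); do
    [[ -S $sock ]] && return 0   # socket file exists: server is listening
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
wait_for_sock /var/tmp/spdk-nbd.sock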
00:07:12.975 [2024-11-20 09:19:38.202022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.975 [2024-11-20 09:19:38.365578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.249 [2024-11-20 09:19:38.500426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.822 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.081 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.082 1+0 records in 00:07:14.082 1+0 records out 00:07:14.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110171 s, 3.7 MB/s 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.082 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.342 1+0 records in 00:07:14.342 1+0 records out 00:07:14.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797449 s, 5.1 MB/s 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.342 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.603 1+0 records in 00:07:14.603 1+0 records out 00:07:14.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124329 s, 3.3 MB/s 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.603 09:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.864 1+0 records in 00:07:14.864 1+0 records out 00:07:14.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127043 s, 3.2 MB/s 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.864 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.129 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.129 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.129 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.129 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.130 1+0 records in 00:07:15.130 1+0 records out 00:07:15.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961237 s, 4.3 MB/s 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.130 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.391 1+0 records in 00:07:15.391 1+0 records out 00:07:15.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534249 s, 7.7 MB/s 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.391 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.649 1+0 records in 00:07:15.649 1+0 records out 00:07:15.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968674 s, 4.2 MB/s 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.649 09:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.649 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd0", 00:07:15.649 "bdev_name": "Nvme0n1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd1", 00:07:15.649 "bdev_name": "Nvme1n1p1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd2", 00:07:15.649 "bdev_name": "Nvme1n1p2" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd3", 00:07:15.649 "bdev_name": "Nvme2n1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd4", 00:07:15.649 "bdev_name": "Nvme2n2" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd5", 00:07:15.649 "bdev_name": "Nvme2n3" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd6", 00:07:15.649 "bdev_name": "Nvme3n1" 00:07:15.649 } 00:07:15.649 ]' 00:07:15.649 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:15.649 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd0", 00:07:15.649 "bdev_name": "Nvme0n1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd1", 00:07:15.649 "bdev_name": "Nvme1n1p1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd2", 00:07:15.649 "bdev_name": "Nvme1n1p2" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd3", 00:07:15.649 "bdev_name": "Nvme2n1" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd4", 00:07:15.649 "bdev_name": "Nvme2n2" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd5", 00:07:15.649 "bdev_name": "Nvme2n3" 00:07:15.649 }, 00:07:15.649 { 00:07:15.649 "nbd_device": "/dev/nbd6", 00:07:15.649 "bdev_name": "Nvme3n1" 00:07:15.649 } 00:07:15.649 ]' 00:07:15.649 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.910 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.172 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.434 09:19:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.697 09:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:16.958 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
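The teardown loop running here pairs each nbd_stop_disk RPC with waitfornbd_exit, which spins until the device name drops out of /proc/partitions. A reconstruction of that helper from the trace above; the i <= 20 bound and the grep -q -w test are visible in the trace, while the sleep between retries is an assumption.

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    # done as soon as the kernel no longer lists the device
    grep -q -w "$nbd_name" /proc/partitions || break
    sleep 0.1
  done
  return 0
}

The trace only shows the success path (grep fails immediately, then break and return 0); the unconditional return 0 above is an assumption.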
00:07:17.227 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.228 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.490 09:19:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.490 09:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:17.751 /dev/nbd0 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.751 1+0 records in 00:07:17.751 1+0 records out 00:07:17.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107325 s, 3.8 MB/s 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.751 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:18.011 /dev/nbd1 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.011 09:19:43 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.011 1+0 records in 00:07:18.011 1+0 records out 00:07:18.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738204 s, 5.5 MB/s 00:07:18.011 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.012 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:18.271 /dev/nbd10 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.271 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.271 1+0 records in 00:07:18.271 1+0 records out 00:07:18.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075883 s, 5.4 MB/s 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.272 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:18.533 /dev/nbd11 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.533 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.795 1+0 records in 00:07:18.795 1+0 records out 00:07:18.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000910169 s, 4.5 MB/s 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.795 09:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:18.795 /dev/nbd12 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
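Every waitfornbd call in this start loop follows the same two-step check seen above: first grep for the name in /proc/partitions, then dd a single 4 KiB block out of the device with O_DIRECT into a scratch file, stat its size, and require it to be non-zero before removing it. A reconstructed sketch (scratch path shortened from the harness's nbdtest file; the sleep interval and the failure return value are assumptions):

probe_nbd() {
  local dev=$1 tmp=/tmp/nbdtest size i
  for ((i = 1; i <= 20; i++)); do
    # O_DIRECT read of one block, as in the traced dd invocations
    if dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [[ $size != 0 ]] && return 0
    fi
    sleep 0.1
  done
  return 1
}

The 3-8 MB/s figures dd prints for these probes time a single 4 KiB transfer and say little about device throughput.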
00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.795 1+0 records in 00:07:18.795 1+0 records out 00:07:18.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125048 s, 3.3 MB/s 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.795 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:19.056 /dev/nbd13 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.056 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.056 1+0 records in 00:07:19.056 1+0 records out 00:07:19.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133473 s, 3.1 MB/s 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.057 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:19.317 /dev/nbd14 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.317 1+0 records in 00:07:19.317 1+0 records out 00:07:19.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00090853 s, 4.5 MB/s 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.317 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd0", 00:07:19.577 "bdev_name": "Nvme0n1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd1", 00:07:19.577 "bdev_name": "Nvme1n1p1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd10", 00:07:19.577 "bdev_name": "Nvme1n1p2" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd11", 00:07:19.577 "bdev_name": "Nvme2n1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd12", 00:07:19.577 "bdev_name": "Nvme2n2" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd13", 00:07:19.577 "bdev_name": "Nvme2n3" 
00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd14", 00:07:19.577 "bdev_name": "Nvme3n1" 00:07:19.577 } 00:07:19.577 ]' 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd0", 00:07:19.577 "bdev_name": "Nvme0n1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd1", 00:07:19.577 "bdev_name": "Nvme1n1p1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd10", 00:07:19.577 "bdev_name": "Nvme1n1p2" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd11", 00:07:19.577 "bdev_name": "Nvme2n1" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd12", 00:07:19.577 "bdev_name": "Nvme2n2" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd13", 00:07:19.577 "bdev_name": "Nvme2n3" 00:07:19.577 }, 00:07:19.577 { 00:07:19.577 "nbd_device": "/dev/nbd14", 00:07:19.577 "bdev_name": "Nvme3n1" 00:07:19.577 } 00:07:19.577 ]' 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.577 /dev/nbd1 00:07:19.577 /dev/nbd10 00:07:19.577 /dev/nbd11 00:07:19.577 /dev/nbd12 00:07:19.577 /dev/nbd13 00:07:19.577 /dev/nbd14' 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.577 09:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.577 /dev/nbd1 00:07:19.577 /dev/nbd10 00:07:19.577 /dev/nbd11 00:07:19.577 /dev/nbd12 00:07:19.577 /dev/nbd13 00:07:19.577 /dev/nbd14' 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.577 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.578 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.578 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:19.578 256+0 records in 00:07:19.578 256+0 records out 00:07:19.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067241 s, 156 MB/s 00:07:19.578 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.578 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.835 256+0 records in 00:07:19.835 256+0 records out 00:07:19.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.265975 s, 3.9 MB/s 00:07:19.835 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.835 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.404 256+0 records in 00:07:20.404 256+0 records out 00:07:20.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.263211 s, 4.0 MB/s 00:07:20.404 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.404 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:20.404 256+0 records in 00:07:20.404 256+0 records out 00:07:20.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252169 s, 4.2 MB/s 00:07:20.404 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.404 09:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:21.778 256+0 records in 00:07:21.778 256+0 records out 00:07:21.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.17182 s, 895 kB/s 00:07:21.778 09:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.778 09:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:21.778 256+0 records in 00:07:21.778 256+0 records out 00:07:21.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175575 s, 6.0 MB/s 00:07:21.778 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.778 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:22.037 256+0 records in 00:07:22.037 256+0 records out 00:07:22.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255603 s, 4.1 MB/s 00:07:22.037 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.037 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:22.298 256+0 records in 00:07:22.298 256+0 records out 00:07:22.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256734 s, 4.1 MB/s 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:22.298 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.299 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.558 09:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.819 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.080 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.341 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:23.599 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:23.599 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:23.599 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:23.599 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.599 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.600 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:23.600 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.600 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.600 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.600 09:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.862 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.126 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.387 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:24.387 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:24.388 malloc_lvol_verify 00:07:24.388 09:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:24.647 acb12526-a46c-47a2-936a-5c3f05e415ae 00:07:24.647 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:24.907 30ef6ab2-5733-424e-b3fe-d376c3a4dcc7 00:07:24.907 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:25.166 /dev/nbd0 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:25.166 mke2fs 1.47.0 (5-Feb-2023) 00:07:25.166 Discarding device blocks: 0/4096 done 00:07:25.166 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:25.166 00:07:25.166 Allocating group tables: 0/1 done 00:07:25.166 Writing inode tables: 0/1 done 00:07:25.166 Creating journal (1024 blocks): done 00:07:25.166 Writing superblocks and filesystem accounting information: 0/1 done 00:07:25.166 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:25.166 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61499 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61499 ']' 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61499 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61499 00:07:25.424 killing process with pid 61499 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61499' 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61499 00:07:25.424 09:19:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61499 00:07:26.402 09:19:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:26.402 00:07:26.402 real 0m13.374s 00:07:26.402 user 0m17.412s 00:07:26.402 sys 0m4.514s 00:07:26.402 09:19:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.402 ************************************ 00:07:26.402 END TEST bdev_nbd 00:07:26.402 ************************************ 00:07:26.402 09:19:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:26.402 skipping fio tests on NVMe due to multi-ns failures. 00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:26.402 09:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.402 09:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:26.402 09:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.402 09:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.402 ************************************ 00:07:26.402 START TEST bdev_verify 00:07:26.402 ************************************ 00:07:26.402 09:19:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.402 [2024-11-20 09:19:51.637486] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:26.402 [2024-11-20 09:19:51.637603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:07:26.402 [2024-11-20 09:19:51.797441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.683 [2024-11-20 09:19:51.900024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.683 [2024-11-20 09:19:51.900108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.255 Running I/O for 5 seconds... 
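The bdevperf invocation above pulls its block devices from the --json config; the per-device statistics it produced follow below. For reference, a minimal sketch of the config shape that flag expects — the bdev_json_nonenclosed and bdev_json_nonarray tests later in this run feed it deliberately broken variants of exactly this layout. The malloc bdev here is a hypothetical stand-in, not the bdev.json used in this run:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF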
00:07:29.581 19200.00 IOPS, 75.00 MiB/s [2024-11-20T09:19:55.974Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-20T09:19:56.908Z] 19605.33 IOPS, 76.58 MiB/s [2024-11-20T09:19:57.894Z] 19856.00 IOPS, 77.56 MiB/s [2024-11-20T09:19:57.894Z] 19942.40 IOPS, 77.90 MiB/s 00:07:32.438 Latency(us) 00:07:32.438 [2024-11-20T09:19:57.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.438 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x0 length 0xbd0bd 00:07:32.438 Nvme0n1 : 5.07 1401.88 5.48 0.00 0.00 90847.61 8922.98 101227.91 00:07:32.438 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:32.438 Nvme0n1 : 5.05 1392.72 5.44 0.00 0.00 91541.00 21878.94 96388.33 00:07:32.438 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x0 length 0x4ff80 00:07:32.438 Nvme1n1p1 : 5.09 1409.08 5.50 0.00 0.00 90341.14 16535.24 97194.93 00:07:32.438 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:32.438 Nvme1n1p1 : 5.06 1392.31 5.44 0.00 0.00 91429.53 25306.98 90338.86 00:07:32.438 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x0 length 0x4ff7f 00:07:32.438 Nvme1n1p2 : 5.09 1408.68 5.50 0.00 0.00 90101.24 16535.24 80659.69 00:07:32.438 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:32.438 Nvme1n1p2 : 5.08 1398.64 5.46 0.00 0.00 90765.95 7612.26 80256.39 00:07:32.438 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x0 length 0x80000 00:07:32.438 Nvme2n1 : 5.09 1408.23 5.50 0.00 0.00 89951.47 16232.76 72997.02 00:07:32.438 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x80000 length 0x80000 00:07:32.438 Nvme2n1 : 5.08 1398.26 5.46 0.00 0.00 90549.69 8065.97 77030.01 00:07:32.438 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.438 Verification LBA range: start 0x0 length 0x80000 00:07:32.438 Nvme2n2 : 5.09 1407.84 5.50 0.00 0.00 89794.85 15829.46 77030.01 00:07:32.438 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.439 Verification LBA range: start 0x80000 length 0x80000 00:07:32.439 Nvme2n2 : 5.09 1407.75 5.50 0.00 0.00 89928.37 8670.92 77836.60 00:07:32.439 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.439 Verification LBA range: start 0x0 length 0x80000 00:07:32.439 Nvme2n3 : 5.09 1407.07 5.50 0.00 0.00 89654.41 16434.41 80659.69 00:07:32.439 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.439 Verification LBA range: start 0x80000 length 0x80000 00:07:32.439 Nvme2n3 : 5.09 1406.96 5.50 0.00 0.00 89776.44 10384.94 78239.90 00:07:32.439 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.439 Verification LBA range: start 0x0 length 0x20000 00:07:32.439 Nvme3n1 : 5.10 1406.36 5.49 0.00 0.00 89517.50 17845.96 82272.89 00:07:32.439 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.439 Verification LBA range: start 0x20000 length 0x20000 00:07:32.439 Nvme3n1 
: 5.10 1406.23 5.49 0.00 0.00 89619.27 11998.13 82272.89 00:07:32.439 [2024-11-20T09:19:57.895Z] =================================================================================================================== 00:07:32.439 [2024-11-20T09:19:57.895Z] Total : 19652.00 76.77 0.00 0.00 90268.78 7612.26 101227.91 00:07:33.372 00:07:33.372 real 0m7.196s 00:07:33.372 user 0m13.402s 00:07:33.372 sys 0m0.227s 00:07:33.372 09:19:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.372 ************************************ 00:07:33.372 END TEST bdev_verify 00:07:33.372 ************************************ 00:07:33.372 09:19:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.372 09:19:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.372 09:19:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:33.372 09:19:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.372 09:19:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.630 ************************************ 00:07:33.630 START TEST bdev_verify_big_io 00:07:33.630 ************************************ 00:07:33.630 09:19:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.630 [2024-11-20 09:19:58.897097] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:33.630 [2024-11-20 09:19:58.897217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62034 ] 00:07:33.630 [2024-11-20 09:19:59.057787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.888 [2024-11-20 09:19:59.163002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.888 [2024-11-20 09:19:59.163215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.454 Running I/O for 5 seconds... 
00:07:40.610 894.00 IOPS, 55.88 MiB/s [2024-11-20T09:20:06.324Z] 2121.50 IOPS, 132.59 MiB/s [2024-11-20T09:20:06.324Z] 2756.67 IOPS, 172.29 MiB/s 00:07:40.868 Latency(us) 00:07:40.868 [2024-11-20T09:20:06.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.868 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.868 Nvme0n1 : 5.87 98.13 6.13 0.00 0.00 1256990.81 32263.88 1380893.93 00:07:40.868 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.868 Nvme0n1 : 5.81 99.12 6.19 0.00 0.00 1240239.13 23290.49 1393799.48 00:07:40.868 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0x0 length 0x4ff8 00:07:40.868 Nvme1n1p1 : 5.87 98.06 6.13 0.00 0.00 1212751.38 92758.65 1193763.45 00:07:40.868 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:40.868 Nvme1n1p1 : 5.96 102.27 6.39 0.00 0.00 1160166.04 66140.95 1206669.00 00:07:40.868 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0x0 length 0x4ff7 00:07:40.868 Nvme1n1p2 : 6.07 101.36 6.33 0.00 0.00 1129306.84 92758.65 1071160.71 00:07:40.868 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.868 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:40.869 Nvme1n1p2 : 5.97 103.42 6.46 0.00 0.00 1112435.40 66544.25 1013085.74 00:07:40.869 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x0 length 0x8000 00:07:40.869 Nvme2n1 : 6.07 101.01 6.31 0.00 0.00 1091086.49 93161.94 1103424.59 00:07:40.869 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x8000 length 0x8000 00:07:40.869 Nvme2n1 : 5.97 107.24 6.70 0.00 0.00 1045078.96 83886.08 1032444.06 00:07:40.869 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x0 length 0x8000 00:07:40.869 Nvme2n2 : 6.16 107.95 6.75 0.00 0.00 998107.59 89128.96 1129235.69 00:07:40.869 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x8000 length 0x8000 00:07:40.869 Nvme2n2 : 6.08 109.78 6.86 0.00 0.00 979254.62 85095.98 1064707.94 00:07:40.869 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x0 length 0x8000 00:07:40.869 Nvme2n3 : 6.25 118.16 7.39 0.00 0.00 888951.81 41136.44 1155046.79 00:07:40.869 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x8000 length 0x8000 00:07:40.869 Nvme2n3 : 6.26 78.80 4.92 0.00 0.00 1338598.72 5318.50 2529487.95 00:07:40.869 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x0 length 0x2000 00:07:40.869 Nvme3n1 : 6.27 127.17 7.95 0.00 0.00 798987.23 10183.29 1180857.90 00:07:40.869 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.869 Verification LBA range: start 0x2000 length 0x2000 00:07:40.869 Nvme3n1 : 6.26 84.84 5.30 0.00 0.00 1196504.89 6301.54 2555299.05 00:07:40.869 
[2024-11-20T09:20:06.325Z] =================================================================================================================== 00:07:40.869 [2024-11-20T09:20:06.325Z] Total : 1437.32 89.83 0.00 0.00 1086533.48 5318.50 2555299.05 00:07:42.771 00:07:42.771 real 0m9.368s 00:07:42.771 user 0m17.765s 00:07:42.771 sys 0m0.260s 00:07:42.771 09:20:08 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.771 ************************************ 00:07:42.771 END TEST bdev_verify_big_io 00:07:42.771 ************************************ 00:07:42.771 09:20:08 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:43.031 09:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.031 09:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:43.031 09:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.031 09:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.031 ************************************ 00:07:43.031 START TEST bdev_write_zeroes 00:07:43.031 ************************************ 00:07:43.032 09:20:08 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.032 [2024-11-20 09:20:08.341098] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:43.032 [2024-11-20 09:20:08.341226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62149 ] 00:07:43.292 [2024-11-20 09:20:08.504135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.292 [2024-11-20 09:20:08.610605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.859 Running I/O for 1 seconds... 
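The zero-fill pass whose results follow reuses the same harness; stripped of trace noise, the traced invocation reduces to (paths as in this environment):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1   # -w selects the workload; write_zeroes issues zero-fill commands instead of verify reads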
00:07:44.795 15828.00 IOPS, 61.83 MiB/s 00:07:44.795 Latency(us) 00:07:44.795 [2024-11-20T09:20:10.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.795 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme0n1 : 1.02 2335.18 9.12 0.00 0.00 54671.63 7763.50 771106.66 00:07:44.795 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme1n1p1 : 1.02 2437.71 9.52 0.00 0.00 52315.20 7965.14 713031.68 00:07:44.795 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme1n1p2 : 1.03 2433.78 9.51 0.00 0.00 52244.45 12199.78 632371.99 00:07:44.795 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme2n1 : 1.03 2368.62 9.25 0.00 0.00 53585.23 12451.84 703352.52 00:07:44.795 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme2n2 : 1.03 2428.19 9.49 0.00 0.00 52193.84 12653.49 706578.90 00:07:44.795 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme2n3 : 1.03 2363.08 9.23 0.00 0.00 53567.97 12401.43 709805.29 00:07:44.795 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.795 Nvme3n1 : 1.03 2360.37 9.22 0.00 0.00 53540.17 11695.66 709805.29 00:07:44.795 [2024-11-20T09:20:10.251Z] =================================================================================================================== 00:07:44.795 [2024-11-20T09:20:10.251Z] Total : 16726.92 65.34 0.00 0.00 53145.75 7763.50 771106.66 00:07:45.728 00:07:45.728 real 0m2.726s 00:07:45.728 user 0m2.425s 00:07:45.728 sys 0m0.185s 00:07:45.728 ************************************ 00:07:45.728 END TEST bdev_write_zeroes 00:07:45.728 09:20:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.728 09:20:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:45.728 ************************************ 00:07:45.728 09:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.728 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:45.728 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.728 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.728 ************************************ 00:07:45.728 START TEST bdev_json_nonenclosed 00:07:45.728 ************************************ 00:07:45.728 09:20:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.728 [2024-11-20 09:20:11.122536] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:45.728 [2024-11-20 09:20:11.122656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62202 ] 00:07:45.987 [2024-11-20 09:20:11.284514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.987 [2024-11-20 09:20:11.389516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.987 [2024-11-20 09:20:11.389594] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:45.987 [2024-11-20 09:20:11.389611] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.987 [2024-11-20 09:20:11.389620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.244 00:07:46.244 real 0m0.513s 00:07:46.245 user 0m0.319s 00:07:46.245 sys 0m0.089s 00:07:46.245 ************************************ 00:07:46.245 END TEST bdev_json_nonenclosed 00:07:46.245 ************************************ 00:07:46.245 09:20:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.245 09:20:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:46.245 09:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.245 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:46.245 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.245 09:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.245 ************************************ 00:07:46.245 START TEST bdev_json_nonarray 00:07:46.245 ************************************ 00:07:46.245 09:20:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.245 [2024-11-20 09:20:11.691016] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:46.245 [2024-11-20 09:20:11.691125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:07:46.503 [2024-11-20 09:20:11.851503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.763 [2024-11-20 09:20:11.955537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.763 [2024-11-20 09:20:11.955625] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:46.763 [2024-11-20 09:20:11.955642] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.763 [2024-11-20 09:20:11.955651] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.763 00:07:46.763 real 0m0.515s 00:07:46.763 user 0m0.311s 00:07:46.763 sys 0m0.100s 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.763 ************************************ 00:07:46.763 END TEST bdev_json_nonarray 00:07:46.763 ************************************ 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:46.763 09:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:46.763 09:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:46.763 09:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:46.763 09:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.763 09:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.763 09:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.763 ************************************ 00:07:46.763 START TEST bdev_gpt_uuid 00:07:46.763 ************************************ 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62253 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62253 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62253 ']' 00:07:46.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.763 09:20:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.022 [2024-11-20 09:20:12.283921] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:47.023 [2024-11-20 09:20:12.284033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62253 ] 00:07:47.023 [2024-11-20 09:20:12.444728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.280 [2024-11-20 09:20:12.549600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.847 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.847 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:47.847 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.847 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.847 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.105 Some configs were skipped because the RPC state that can call them passed over. 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:48.105 { 00:07:48.105 "name": "Nvme1n1p1", 00:07:48.105 "aliases": [ 00:07:48.105 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:48.105 ], 00:07:48.105 "product_name": "GPT Disk", 00:07:48.105 "block_size": 4096, 00:07:48.105 "num_blocks": 655104, 00:07:48.105 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:48.105 "assigned_rate_limits": { 00:07:48.105 "rw_ios_per_sec": 0, 00:07:48.105 "rw_mbytes_per_sec": 0, 00:07:48.105 "r_mbytes_per_sec": 0, 00:07:48.105 "w_mbytes_per_sec": 0 00:07:48.105 }, 00:07:48.105 "claimed": false, 00:07:48.105 "zoned": false, 00:07:48.105 "supported_io_types": { 00:07:48.105 "read": true, 00:07:48.105 "write": true, 00:07:48.105 "unmap": true, 00:07:48.105 "flush": true, 00:07:48.105 "reset": true, 00:07:48.105 "nvme_admin": false, 00:07:48.105 "nvme_io": false, 00:07:48.105 "nvme_io_md": false, 00:07:48.105 "write_zeroes": true, 00:07:48.105 "zcopy": false, 00:07:48.105 "get_zone_info": false, 00:07:48.105 "zone_management": false, 00:07:48.105 "zone_append": false, 00:07:48.105 "compare": true, 00:07:48.105 "compare_and_write": false, 00:07:48.105 "abort": true, 00:07:48.105 "seek_hole": false, 00:07:48.105 "seek_data": false, 00:07:48.105 "copy": true, 00:07:48.105 "nvme_iov_md": false 00:07:48.105 }, 00:07:48.105 "driver_specific": { 
00:07:48.105 "gpt": { 00:07:48.105 "base_bdev": "Nvme1n1", 00:07:48.105 "offset_blocks": 256, 00:07:48.105 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:48.105 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:48.105 "partition_name": "SPDK_TEST_first" 00:07:48.105 } 00:07:48.105 } 00:07:48.105 } 00:07:48.105 ]' 00:07:48.105 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:48.363 { 00:07:48.363 "name": "Nvme1n1p2", 00:07:48.363 "aliases": [ 00:07:48.363 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:48.363 ], 00:07:48.363 "product_name": "GPT Disk", 00:07:48.363 "block_size": 4096, 00:07:48.363 "num_blocks": 655103, 00:07:48.363 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.363 "assigned_rate_limits": { 00:07:48.363 "rw_ios_per_sec": 0, 00:07:48.363 "rw_mbytes_per_sec": 0, 00:07:48.363 "r_mbytes_per_sec": 0, 00:07:48.363 "w_mbytes_per_sec": 0 00:07:48.363 }, 00:07:48.363 "claimed": false, 00:07:48.363 "zoned": false, 00:07:48.363 "supported_io_types": { 00:07:48.363 "read": true, 00:07:48.363 "write": true, 00:07:48.363 "unmap": true, 00:07:48.363 "flush": true, 00:07:48.363 "reset": true, 00:07:48.363 "nvme_admin": false, 00:07:48.363 "nvme_io": false, 00:07:48.363 "nvme_io_md": false, 00:07:48.363 "write_zeroes": true, 00:07:48.363 "zcopy": false, 00:07:48.363 "get_zone_info": false, 00:07:48.363 "zone_management": false, 00:07:48.363 "zone_append": false, 00:07:48.363 "compare": true, 00:07:48.363 "compare_and_write": false, 00:07:48.363 "abort": true, 00:07:48.363 "seek_hole": false, 00:07:48.363 "seek_data": false, 00:07:48.363 "copy": true, 00:07:48.363 "nvme_iov_md": false 00:07:48.363 }, 00:07:48.363 "driver_specific": { 00:07:48.363 "gpt": { 00:07:48.363 "base_bdev": "Nvme1n1", 00:07:48.363 "offset_blocks": 655360, 00:07:48.363 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:48.363 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.363 "partition_name": "SPDK_TEST_second" 00:07:48.363 } 00:07:48.363 } 00:07:48.363 } 00:07:48.363 ]' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62253 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62253 ']' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62253 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.363 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62253 00:07:48.364 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.364 killing process with pid 62253 00:07:48.364 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.364 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62253' 00:07:48.364 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62253 00:07:48.364 09:20:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62253 00:07:50.269 00:07:50.269 real 0m3.088s 00:07:50.269 user 0m3.309s 00:07:50.269 sys 0m0.372s 00:07:50.269 09:20:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.269 ************************************ 00:07:50.269 END TEST bdev_gpt_uuid 00:07:50.269 ************************************ 00:07:50.269 09:20:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:50.269 09:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:50.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.528 Waiting for block devices as requested 00:07:50.528 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.528 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:50.528 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.790 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.112 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:56.112 09:20:21 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:56.112 09:20:21 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:56.112 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:56.112 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:56.112 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:56.112 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:56.112 09:20:21 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:56.113 00:07:56.113 real 1m0.624s 00:07:56.113 user 1m17.525s 00:07:56.113 sys 0m8.841s 00:07:56.113 09:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.113 ************************************ 00:07:56.113 END TEST blockdev_nvme_gpt 00:07:56.113 ************************************ 00:07:56.113 09:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.113 09:20:21 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:56.113 09:20:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.113 09:20:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.113 09:20:21 -- common/autotest_common.sh@10 -- # set +x 00:07:56.113 ************************************ 00:07:56.113 START TEST nvme 00:07:56.113 ************************************ 00:07:56.113 09:20:21 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:56.374 * Looking for test storage... 00:07:56.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.374 09:20:21 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.374 09:20:21 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.374 09:20:21 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.374 09:20:21 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.374 09:20:21 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.374 09:20:21 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:56.374 09:20:21 nvme -- scripts/common.sh@345 -- # : 1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.374 09:20:21 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.374 09:20:21 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@353 -- # local d=1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.374 09:20:21 nvme -- scripts/common.sh@355 -- # echo 1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.374 09:20:21 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@353 -- # local d=2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.374 09:20:21 nvme -- scripts/common.sh@355 -- # echo 2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.374 09:20:21 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.374 09:20:21 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.374 09:20:21 nvme -- scripts/common.sh@368 -- # return 0 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.374 --rc genhtml_branch_coverage=1 00:07:56.374 --rc genhtml_function_coverage=1 00:07:56.374 --rc genhtml_legend=1 00:07:56.374 --rc geninfo_all_blocks=1 00:07:56.374 --rc geninfo_unexecuted_blocks=1 00:07:56.374 00:07:56.374 ' 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.374 --rc genhtml_branch_coverage=1 00:07:56.374 --rc genhtml_function_coverage=1 00:07:56.374 --rc genhtml_legend=1 00:07:56.374 --rc geninfo_all_blocks=1 00:07:56.374 --rc geninfo_unexecuted_blocks=1 00:07:56.374 00:07:56.374 ' 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.374 --rc genhtml_branch_coverage=1 00:07:56.374 --rc genhtml_function_coverage=1 00:07:56.374 --rc genhtml_legend=1 00:07:56.374 --rc geninfo_all_blocks=1 00:07:56.374 --rc geninfo_unexecuted_blocks=1 00:07:56.374 00:07:56.374 ' 00:07:56.374 09:20:21 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.374 --rc genhtml_branch_coverage=1 00:07:56.374 --rc genhtml_function_coverage=1 00:07:56.374 --rc genhtml_legend=1 00:07:56.374 --rc geninfo_all_blocks=1 00:07:56.374 --rc geninfo_unexecuted_blocks=1 00:07:56.374 00:07:56.374 ' 00:07:56.374 09:20:21 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.521 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.521 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.521 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.521 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.521 09:20:22 nvme -- nvme/nvme.sh@79 -- # uname 00:07:57.521 09:20:22 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:57.521 09:20:22 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:57.521 09:20:22 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:57.521 09:20:22 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:57.521 Waiting for stub to ready for secondary processes... 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1075 -- # stubpid=62893 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62893 ]] 00:07:57.521 09:20:22 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:57.521 [2024-11-20 09:20:22.909877] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:57.521 [2024-11-20 09:20:22.910250] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:58.464 09:20:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:58.464 09:20:23 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62893 ]] 00:07:58.464 09:20:23 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:58.725 [2024-11-20 09:20:24.010910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.725 [2024-11-20 09:20:24.133557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.725 [2024-11-20 09:20:24.133841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.725 [2024-11-20 09:20:24.133916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.725 [2024-11-20 09:20:24.151617] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:58.725 [2024-11-20 09:20:24.151661] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:58.725 [2024-11-20 09:20:24.166818] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:58.725 [2024-11-20 09:20:24.166983] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:58.725 [2024-11-20 09:20:24.169801] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:58.725 [2024-11-20 09:20:24.170408] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:58.725 [2024-11-20 09:20:24.170499] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:58.725 [2024-11-20 09:20:24.173810] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:58.725 [2024-11-20 09:20:24.174115] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:58.725 [2024-11-20 09:20:24.174220] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:58.725 [2024-11-20 09:20:24.178003] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:58.986 [2024-11-20 09:20:24.178688] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:58.986 [2024-11-20 09:20:24.178760] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:58.986 [2024-11-20 09:20:24.178796] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:58.986 [2024-11-20 09:20:24.178828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:59.598 done. 00:07:59.598 09:20:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:59.598 09:20:24 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:59.598 09:20:24 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.598 09:20:24 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:59.598 09:20:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.598 09:20:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.598 ************************************ 00:07:59.598 START TEST nvme_reset 00:07:59.598 ************************************ 00:07:59.598 09:20:24 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.860 Initializing NVMe Controllers 00:07:59.860 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:59.860 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:59.860 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:59.860 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:59.860 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:59.860 00:07:59.860 ************************************ 00:07:59.860 END TEST nvme_reset 00:07:59.860 ************************************ 00:07:59.860 real 0m0.234s 00:07:59.860 user 0m0.080s 00:07:59.860 sys 0m0.113s 00:07:59.860 09:20:25 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.860 09:20:25 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:59.860 09:20:25 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:59.860 09:20:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.860 09:20:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.860 09:20:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.860 ************************************ 00:07:59.860 START TEST nvme_identify 00:07:59.860 ************************************ 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:59.860 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:59.860 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:59.860 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:59.860 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:59.860 09:20:25 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:59.860 09:20:25 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:59.860 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:00.123 [2024-11-20 09:20:25.465036] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62926 terminated unexpected 00:08:00.123 ===================================================== 00:08:00.123 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.123 ===================================================== 00:08:00.123 Controller Capabilities/Features 00:08:00.123 ================================ 00:08:00.123 Vendor ID: 1b36 00:08:00.123 Subsystem Vendor ID: 1af4 00:08:00.123 Serial Number: 12343 00:08:00.123 Model Number: QEMU NVMe Ctrl 00:08:00.123 Firmware Version: 8.0.0 00:08:00.123 Recommended Arb Burst: 6 00:08:00.123 IEEE OUI Identifier: 00 54 52 00:08:00.123 Multi-path I/O 00:08:00.123 May have multiple subsystem ports: No 00:08:00.123 May have multiple controllers: Yes 00:08:00.123 Associated with SR-IOV VF: No 00:08:00.124 Max Data Transfer Size: 524288 00:08:00.124 Max Number of Namespaces: 256 00:08:00.124 Max Number of I/O Queues: 64 00:08:00.124 NVMe Specification Version (VS): 1.4 00:08:00.124 NVMe Specification Version (Identify): 1.4 00:08:00.124 Maximum Queue Entries: 2048 00:08:00.124 Contiguous Queues Required: Yes 00:08:00.124 Arbitration Mechanisms Supported 00:08:00.124 Weighted Round Robin: Not Supported 00:08:00.124 Vendor Specific: Not Supported 00:08:00.124 Reset Timeout: 7500 ms 00:08:00.124 Doorbell Stride: 4 bytes 00:08:00.124 NVM Subsystem Reset: Not Supported 00:08:00.124 Command Sets Supported 00:08:00.124 NVM Command Set: Supported 00:08:00.124 Boot Partition: Not Supported 00:08:00.124 Memory Page Size Minimum: 4096 bytes 00:08:00.124 Memory Page Size Maximum: 65536 bytes 00:08:00.124 Persistent Memory Region: Not Supported 00:08:00.124 Optional Asynchronous Events Supported 00:08:00.124 Namespace Attribute Notices: Supported 00:08:00.124 Firmware Activation Notices: Not Supported 00:08:00.124 ANA Change Notices: Not Supported 00:08:00.124 PLE Aggregate Log Change Notices: Not Supported 00:08:00.124 LBA Status Info Alert Notices: Not Supported 00:08:00.124 EGE Aggregate Log Change Notices: Not Supported 00:08:00.124 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.124 Zone Descriptor Change Notices: Not Supported 00:08:00.124 Discovery Log Change Notices: Not Supported 00:08:00.124 Controller Attributes 00:08:00.124 128-bit Host Identifier: Not Supported 00:08:00.124 Non-Operational Permissive Mode: Not Supported 00:08:00.124 NVM Sets: Not Supported 00:08:00.124 Read Recovery Levels: Not Supported 00:08:00.124 Endurance Groups: Supported 00:08:00.124 Predictable Latency Mode: Not Supported 00:08:00.124 Traffic Based Keep ALive: Not Supported 00:08:00.124 Namespace Granularity: Not Supported 00:08:00.124 SQ Associations: Not Supported 00:08:00.124 UUID List: Not Supported 00:08:00.124 Multi-Domain Subsystem: Not Supported 00:08:00.124 Fixed Capacity Management: Not Supported 00:08:00.124 Variable Capacity Management: Not Supported 00:08:00.124 Delete Endurance Group: Not Supported 00:08:00.124 Delete NVM Set: Not Supported 00:08:00.124 Extended LBA Formats Supported: Supported 00:08:00.124 Flexible Data Placement Supported: Supported 00:08:00.124 00:08:00.124 Controller Memory Buffer Support 00:08:00.124 ================================ 00:08:00.124 Supported: No 00:08:00.124 
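The identify pass above iterates over the controller addresses that get_nvme_bdfs gathered by piping gen_nvme.sh through jq. A minimal sketch of that enumeration idiom, reusing the paths and the jq filter exactly as traced above (the JSON shape config[].params.traddr is assumed from that trace, not from gen_nvme.sh documentation):

  # collect the PCI addresses of the local NVMe controllers, as the trace above does
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0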
00:08:00.124 Persistent Memory Region Support 00:08:00.124 ================================ 00:08:00.124 Supported: No 00:08:00.124 00:08:00.124 Admin Command Set Attributes 00:08:00.124 ============================ 00:08:00.124 Security Send/Receive: Not Supported 00:08:00.124 Format NVM: Supported 00:08:00.124 Firmware Activate/Download: Not Supported 00:08:00.124 Namespace Management: Supported 00:08:00.124 Device Self-Test: Not Supported 00:08:00.124 Directives: Supported 00:08:00.124 NVMe-MI: Not Supported 00:08:00.124 Virtualization Management: Not Supported 00:08:00.124 Doorbell Buffer Config: Supported 00:08:00.124 Get LBA Status Capability: Not Supported 00:08:00.124 Command & Feature Lockdown Capability: Not Supported 00:08:00.124 Abort Command Limit: 4 00:08:00.124 Async Event Request Limit: 4 00:08:00.124 Number of Firmware Slots: N/A 00:08:00.124 Firmware Slot 1 Read-Only: N/A 00:08:00.124 Firmware Activation Without Reset: N/A 00:08:00.124 Multiple Update Detection Support: N/A 00:08:00.124 Firmware Update Granularity: No Information Provided 00:08:00.124 Per-Namespace SMART Log: Yes 00:08:00.124 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.124 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:00.124 Command Effects Log Page: Supported 00:08:00.124 Get Log Page Extended Data: Supported 00:08:00.124 Telemetry Log Pages: Not Supported 00:08:00.124 Persistent Event Log Pages: Not Supported 00:08:00.124 Supported Log Pages Log Page: May Support 00:08:00.124 Commands Supported & Effects Log Page: Not Supported 00:08:00.124 Feature Identifiers & Effects Log Page:May Support 00:08:00.124 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.124 Data Area 4 for Telemetry Log: Not Supported 00:08:00.124 Error Log Page Entries Supported: 1 00:08:00.124 Keep Alive: Not Supported 00:08:00.124 00:08:00.124 NVM Command Set Attributes 00:08:00.124 ========================== 00:08:00.124 Submission Queue Entry Size 00:08:00.124 Max: 64 00:08:00.124 Min: 64 00:08:00.124 Completion Queue Entry Size 00:08:00.124 Max: 16 00:08:00.124 Min: 16 00:08:00.124 Number of Namespaces: 256 00:08:00.124 Compare Command: Supported 00:08:00.124 Write Uncorrectable Command: Not Supported 00:08:00.124 Dataset Management Command: Supported 00:08:00.124 Write Zeroes Command: Supported 00:08:00.124 Set Features Save Field: Supported 00:08:00.124 Reservations: Not Supported 00:08:00.124 Timestamp: Supported 00:08:00.124 Copy: Supported 00:08:00.124 Volatile Write Cache: Present 00:08:00.124 Atomic Write Unit (Normal): 1 00:08:00.124 Atomic Write Unit (PFail): 1 00:08:00.124 Atomic Compare & Write Unit: 1 00:08:00.124 Fused Compare & Write: Not Supported 00:08:00.124 Scatter-Gather List 00:08:00.124 SGL Command Set: Supported 00:08:00.124 SGL Keyed: Not Supported 00:08:00.124 SGL Bit Bucket Descriptor: Not Supported 00:08:00.124 SGL Metadata Pointer: Not Supported 00:08:00.124 Oversized SGL: Not Supported 00:08:00.124 SGL Metadata Address: Not Supported 00:08:00.124 SGL Offset: Not Supported 00:08:00.124 Transport SGL Data Block: Not Supported 00:08:00.124 Replay Protected Memory Block: Not Supported 00:08:00.124 00:08:00.124 Firmware Slot Information 00:08:00.124 ========================= 00:08:00.124 Active slot: 1 00:08:00.124 Slot 1 Firmware Revision: 1.0 00:08:00.124 00:08:00.124 00:08:00.124 Commands Supported and Effects 00:08:00.124 ============================== 00:08:00.124 Admin Commands 00:08:00.124 -------------- 00:08:00.124 Delete I/O Submission Queue (00h): Supported 
00:08:00.124 Create I/O Submission Queue (01h): Supported 00:08:00.124 Get Log Page (02h): Supported 00:08:00.124 Delete I/O Completion Queue (04h): Supported 00:08:00.124 Create I/O Completion Queue (05h): Supported 00:08:00.124 Identify (06h): Supported 00:08:00.124 Abort (08h): Supported 00:08:00.124 Set Features (09h): Supported 00:08:00.124 Get Features (0Ah): Supported 00:08:00.124 Asynchronous Event Request (0Ch): Supported 00:08:00.124 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.124 Directive Send (19h): Supported 00:08:00.124 Directive Receive (1Ah): Supported 00:08:00.124 Virtualization Management (1Ch): Supported 00:08:00.124 Doorbell Buffer Config (7Ch): Supported 00:08:00.124 Format NVM (80h): Supported LBA-Change 00:08:00.124 I/O Commands 00:08:00.124 ------------ 00:08:00.124 Flush (00h): Supported LBA-Change 00:08:00.124 Write (01h): Supported LBA-Change 00:08:00.124 Read (02h): Supported 00:08:00.124 Compare (05h): Supported 00:08:00.124 Write Zeroes (08h): Supported LBA-Change 00:08:00.124 Dataset Management (09h): Supported LBA-Change 00:08:00.124 Unknown (0Ch): Supported 00:08:00.124 Unknown (12h): Supported 00:08:00.124 Copy (19h): Supported LBA-Change 00:08:00.124 Unknown (1Dh): Supported LBA-Change 00:08:00.124 00:08:00.124 Error Log 00:08:00.124 ========= 00:08:00.124 00:08:00.124 Arbitration 00:08:00.124 =========== 00:08:00.124 Arbitration Burst: no limit 00:08:00.124 00:08:00.124 Power Management 00:08:00.124 ================ 00:08:00.124 Number of Power States: 1 00:08:00.124 Current Power State: Power State #0 00:08:00.124 Power State #0: 00:08:00.124 Max Power: 25.00 W 00:08:00.124 Non-Operational State: Operational 00:08:00.124 Entry Latency: 16 microseconds 00:08:00.124 Exit Latency: 4 microseconds 00:08:00.124 Relative Read Throughput: 0 00:08:00.124 Relative Read Latency: 0 00:08:00.124 Relative Write Throughput: 0 00:08:00.124 Relative Write Latency: 0 00:08:00.124 Idle Power: Not Reported 00:08:00.124 Active Power: Not Reported 00:08:00.124 Non-Operational Permissive Mode: Not Supported 00:08:00.124 00:08:00.124 Health Information 00:08:00.125 ================== 00:08:00.125 Critical Warnings: 00:08:00.125 Available Spare Space: OK 00:08:00.125 Temperature: OK 00:08:00.125 Device Reliability: OK 00:08:00.125 Read Only: No 00:08:00.125 Volatile Memory Backup: OK 00:08:00.125 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.125 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.125 Available Spare: 0% 00:08:00.125 Available Spare Threshold: 0% 00:08:00.125 Life Percentage Used: 0% 00:08:00.125 Data Units Read: 708 00:08:00.125 Data Units Written: 637 00:08:00.125 Host Read Commands: 34231 00:08:00.125 Host Write Commands: 33656 00:08:00.125 Controller Busy Time: 0 minutes 00:08:00.125 Power Cycles: 0 00:08:00.125 Power On Hours: 0 hours 00:08:00.125 Unsafe Shutdowns: 0 00:08:00.125 Unrecoverable Media Errors: 0 00:08:00.125 Lifetime Error Log Entries: 0 00:08:00.125 Warning Temperature Time: 0 minutes 00:08:00.125 Critical Temperature Time: 0 minutes 00:08:00.125 00:08:00.125 Number of Queues 00:08:00.125 ================ 00:08:00.125 Number of I/O Submission Queues: 64 00:08:00.125 Number of I/O Completion Queues: 64 00:08:00.125 00:08:00.125 ZNS Specific Controller Data 00:08:00.125 ============================ 00:08:00.125 Zone Append Size Limit: 0 00:08:00.125 00:08:00.125 00:08:00.125 Active Namespaces 00:08:00.125 ================= 00:08:00.125 Namespace ID:1 00:08:00.125 Error Recovery Timeout: Unlimited 00:08:00.125 
Command Set Identifier: NVM (00h) 00:08:00.125 Deallocate: Supported 00:08:00.125 Deallocated/Unwritten Error: Supported 00:08:00.125 Deallocated Read Value: All 0x00 00:08:00.125 Deallocate in Write Zeroes: Not Supported 00:08:00.125 Deallocated Guard Field: 0xFFFF 00:08:00.125 Flush: Supported 00:08:00.125 Reservation: Not Supported 00:08:00.125 Namespace Sharing Capabilities: Multiple Controllers 00:08:00.125 Size (in LBAs): 262144 (1GiB) 00:08:00.125 Capacity (in LBAs): 262144 (1GiB) 00:08:00.125 Utilization (in LBAs): 262144 (1GiB) 00:08:00.125 Thin Provisioning: Not Supported 00:08:00.125 Per-NS Atomic Units: No 00:08:00.125 Maximum Single Source Range Length: 128 00:08:00.125 Maximum Copy Length: 128 00:08:00.125 Maximum Source Range Count: 128 00:08:00.125 NGUID/EUI64 Never Reused: No 00:08:00.125 Namespace Write Protected: No 00:08:00.125 Endurance group ID: 1 00:08:00.125 Number of LBA Formats: 8 00:08:00.125 Current LBA Format: LBA Format #04 00:08:00.125 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.125 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.125 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.125 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.125 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.125 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.125 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.125 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.125 00:08:00.125 Get Feature FDP: 00:08:00.125 ================ 00:08:00.125 Enabled: Yes 00:08:00.125 FDP configuration index: 0 00:08:00.125 00:08:00.125 FDP configurations log page 00:08:00.125 =========================== 00:08:00.125 Number of FDP configurations: 1 00:08:00.125 Version: 0 00:08:00.125 Size: 112 00:08:00.125 FDP Configuration Descriptor: 0 00:08:00.125 Descriptor Size: 96 00:08:00.125 Reclaim Group Identifier format: 2 00:08:00.125 FDP Volatile Write Cache: Not Present 00:08:00.125 FDP Configuration: Valid 00:08:00.125 Vendor Specific Size: 0 00:08:00.125 Number of Reclaim Groups: 2 00:08:00.125 Number of Reclaim Unit Handles: 8 00:08:00.125 Max Placement Identifiers: 128 00:08:00.125 Number of Namespaces Supported: 256 00:08:00.125 Reclaim Unit Nominal Size: 6000000 bytes 00:08:00.125 Estimated Reclaim Unit Time Limit: Not Reported 00:08:00.125 RUH Desc #000: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #001: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #002: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #003: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #004: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #005: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #006: RUH Type: Initially Isolated 00:08:00.125 RUH Desc #007: RUH Type: Initially Isolated 00:08:00.125 00:08:00.125 FDP reclaim unit handle usage log page 00:08:00.125 ==================================[2024-11-20 09:20:25.467678] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62926 terminated unexpected 00:08:00.125 ==== 00:08:00.125 Number of Reclaim Unit Handles: 8 00:08:00.125 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:00.125 RUH Usage Desc #001: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc #002: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc #003: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc #004: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc #005: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc #006: RUH Attributes: Unused 00:08:00.125 RUH Usage Desc
#007: RUH Attributes: Unused 00:08:00.125 00:08:00.125 FDP statistics log page 00:08:00.125 ======================= 00:08:00.125 Host bytes with metadata written: 368025600 00:08:00.125 Media bytes with metadata written: 368066560 00:08:00.125 Media bytes erased: 0 00:08:00.125 00:08:00.125 FDP events log page 00:08:00.125 =================== 00:08:00.125 Number of FDP events: 0 00:08:00.125 00:08:00.125 NVM Specific Namespace Data 00:08:00.125 =========================== 00:08:00.125 Logical Block Storage Tag Mask: 0 00:08:00.125 Protection Information Capabilities: 00:08:00.125 16b Guard Protection Information Storage Tag Support: No 00:08:00.125 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.125 Storage Tag Check Read Support: No 00:08:00.125 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.125 ===================================================== 00:08:00.125 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.125 ===================================================== 00:08:00.125 Controller Capabilities/Features 00:08:00.125 ================================ 00:08:00.125 Vendor ID: 1b36 00:08:00.125 Subsystem Vendor ID: 1af4 00:08:00.125 Serial Number: 12340 00:08:00.125 Model Number: QEMU NVMe Ctrl 00:08:00.125 Firmware Version: 8.0.0 00:08:00.125 Recommended Arb Burst: 6 00:08:00.125 IEEE OUI Identifier: 00 54 52 00:08:00.125 Multi-path I/O 00:08:00.125 May have multiple subsystem ports: No 00:08:00.125 May have multiple controllers: No 00:08:00.125 Associated with SR-IOV VF: No 00:08:00.125 Max Data Transfer Size: 524288 00:08:00.125 Max Number of Namespaces: 256 00:08:00.125 Max Number of I/O Queues: 64 00:08:00.125 NVMe Specification Version (VS): 1.4 00:08:00.125 NVMe Specification Version (Identify): 1.4 00:08:00.125 Maximum Queue Entries: 2048 00:08:00.125 Contiguous Queues Required: Yes 00:08:00.125 Arbitration Mechanisms Supported 00:08:00.125 Weighted Round Robin: Not Supported 00:08:00.125 Vendor Specific: Not Supported 00:08:00.125 Reset Timeout: 7500 ms 00:08:00.125 Doorbell Stride: 4 bytes 00:08:00.125 NVM Subsystem Reset: Not Supported 00:08:00.125 Command Sets Supported 00:08:00.125 NVM Command Set: Supported 00:08:00.125 Boot Partition: Not Supported 00:08:00.125 Memory Page Size Minimum: 4096 bytes 00:08:00.125 Memory Page Size Maximum: 65536 bytes 00:08:00.125 Persistent Memory Region: Not Supported 00:08:00.125 Optional Asynchronous Events Supported 00:08:00.125 Namespace Attribute Notices: Supported 00:08:00.125 Firmware Activation Notices: Not Supported 00:08:00.125 ANA Change Notices: Not Supported 00:08:00.125 PLE Aggregate Log Change Notices: Not Supported 00:08:00.125 LBA Status Info Alert Notices: Not Supported 00:08:00.125 EGE Aggregate Log Change 
Notices: Not Supported 00:08:00.125 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.125 Zone Descriptor Change Notices: Not Supported 00:08:00.126 Discovery Log Change Notices: Not Supported 00:08:00.126 Controller Attributes 00:08:00.126 128-bit Host Identifier: Not Supported 00:08:00.126 Non-Operational Permissive Mode: Not Supported 00:08:00.126 NVM Sets: Not Supported 00:08:00.126 Read Recovery Levels: Not Supported 00:08:00.126 Endurance Groups: Not Supported 00:08:00.126 Predictable Latency Mode: Not Supported 00:08:00.126 Traffic Based Keep ALive: Not Supported 00:08:00.126 Namespace Granularity: Not Supported 00:08:00.126 SQ Associations: Not Supported 00:08:00.126 UUID List: Not Supported 00:08:00.126 Multi-Domain Subsystem: Not Supported 00:08:00.126 Fixed Capacity Management: Not Supported 00:08:00.126 Variable Capacity Management: Not Supported 00:08:00.126 Delete Endurance Group: Not Supported 00:08:00.126 Delete NVM Set: Not Supported 00:08:00.126 Extended LBA Formats Supported: Supported 00:08:00.126 Flexible Data Placement Supported: Not Supported 00:08:00.126 00:08:00.126 Controller Memory Buffer Support 00:08:00.126 ================================ 00:08:00.126 Supported: No 00:08:00.126 00:08:00.126 Persistent Memory Region Support 00:08:00.126 ================================ 00:08:00.126 Supported: No 00:08:00.126 00:08:00.126 Admin Command Set Attributes 00:08:00.126 ============================ 00:08:00.126 Security Send/Receive: Not Supported 00:08:00.126 Format NVM: Supported 00:08:00.126 Firmware Activate/Download: Not Supported 00:08:00.126 Namespace Management: Supported 00:08:00.126 Device Self-Test: Not Supported 00:08:00.126 Directives: Supported 00:08:00.126 NVMe-MI: Not Supported 00:08:00.126 Virtualization Management: Not Supported 00:08:00.126 Doorbell Buffer Config: Supported 00:08:00.126 Get LBA Status Capability: Not Supported 00:08:00.126 Command & Feature Lockdown Capability: Not Supported 00:08:00.126 Abort Command Limit: 4 00:08:00.126 Async Event Request Limit: 4 00:08:00.126 Number of Firmware Slots: N/A 00:08:00.126 Firmware Slot 1 Read-Only: N/A 00:08:00.126 Firmware Activation Without Reset: N/A 00:08:00.126 Multiple Update Detection Support: N/A 00:08:00.126 Firmware Update Granularity: No Information Provided 00:08:00.126 Per-Namespace SMART Log: Yes 00:08:00.126 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.126 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.126 Command Effects Log Page: Supported 00:08:00.126 Get Log Page Extended Data: Supported 00:08:00.126 Telemetry Log Pages: Not Supported 00:08:00.126 Persistent Event Log Pages: Not Supported 00:08:00.126 Supported Log Pages Log Page: May Support 00:08:00.126 Commands Supported & Effects Log Page: Not Supported 00:08:00.126 Feature Identifiers & Effects Log Page:May Support 00:08:00.126 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.126 Data Area 4 for Telemetry Log: Not Supported 00:08:00.126 Error Log Page Entries Supported: 1 00:08:00.126 Keep Alive: Not Supported 00:08:00.126 00:08:00.126 NVM Command Set Attributes 00:08:00.126 ========================== 00:08:00.126 Submission Queue Entry Size 00:08:00.126 Max: 64 00:08:00.126 Min: 64 00:08:00.126 Completion Queue Entry Size 00:08:00.126 Max: 16 00:08:00.126 Min: 16 00:08:00.126 Number of Namespaces: 256 00:08:00.126 Compare Command: Supported 00:08:00.126 Write Uncorrectable Command: Not Supported 00:08:00.126 Dataset Management Command: Supported 00:08:00.126 Write Zeroes Command: 
Supported 00:08:00.126 Set Features Save Field: Supported 00:08:00.126 Reservations: Not Supported 00:08:00.126 Timestamp: Supported 00:08:00.126 Copy: Supported 00:08:00.126 Volatile Write Cache: Present 00:08:00.126 Atomic Write Unit (Normal): 1 00:08:00.126 Atomic Write Unit (PFail): 1 00:08:00.126 Atomic Compare & Write Unit: 1 00:08:00.126 Fused Compare & Write: Not Supported 00:08:00.126 Scatter-Gather List 00:08:00.126 SGL Command Set: Supported 00:08:00.126 SGL Keyed: Not Supported 00:08:00.126 SGL Bit Bucket Descriptor: Not Supported 00:08:00.126 SGL Metadata Pointer: Not Supported 00:08:00.126 Oversized SGL: Not Supported 00:08:00.126 SGL Metadata Address: Not Supported 00:08:00.126 SGL Offset: Not Supported 00:08:00.126 Transport SGL Data Block: Not Supported 00:08:00.126 Replay Protected Memory Block: Not Supported 00:08:00.126 00:08:00.126 Firmware Slot Information 00:08:00.126 ========================= 00:08:00.126 Active slot: 1 00:08:00.126 Slot 1 Firmware Revision: 1.0 00:08:00.126 00:08:00.126 00:08:00.126 Commands Supported and Effects 00:08:00.126 ============================== 00:08:00.126 Admin Commands 00:08:00.126 -------------- 00:08:00.126 Delete I/O Submission Queue (00h): Supported 00:08:00.126 Create I/O Submission Queue (01h): Supported 00:08:00.126 Get Log Page (02h): Supported 00:08:00.126 Delete I/O Completion Queue (04h): Supported 00:08:00.126 Create I/O Completion Queue (05h): Supported 00:08:00.126 Identify (06h): Supported 00:08:00.126 Abort (08h): Supported 00:08:00.126 Set Features (09h): Supported 00:08:00.126 Get Features (0Ah): Supported 00:08:00.126 Asynchronous Event Request (0Ch): Supported 00:08:00.126 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.126 Directive Send (19h): Supported 00:08:00.126 Directive Receive (1Ah): Supported 00:08:00.126 Virtualization Management (1Ch): Supported 00:08:00.126 Doorbell Buffer Config (7Ch): Supported 00:08:00.126 Format NVM (80h): Supported LBA-Change 00:08:00.126 I/O Commands 00:08:00.126 ------------ 00:08:00.126 Flush (00h): Supported LBA-Change 00:08:00.126 Write (01h): Supported LBA-Change 00:08:00.126 Read (02h): Supported 00:08:00.126 Compare (05h): Supported 00:08:00.126 Write Zeroes (08h): Supported LBA-Change 00:08:00.126 Dataset Management (09h): Supported LBA-Change 00:08:00.126 Unknown (0Ch): Supported 00:08:00.126 Unknown (12h): Supported 00:08:00.126 Copy (19h): Supported LBA-Change 00:08:00.126 Unknown (1Dh): Supported LBA-Change 00:08:00.126 00:08:00.126 Error Log 00:08:00.126 ========= 00:08:00.126 00:08:00.126 Arbitration 00:08:00.126 =========== 00:08:00.126 Arbitration Burst: no limit 00:08:00.126 00:08:00.126 Power Management 00:08:00.126 ================ 00:08:00.126 Number of Power States: 1 00:08:00.126 Current Power State: Power State #0 00:08:00.126 Power State #0: 00:08:00.126 Max Power: 25.00 W 00:08:00.126 Non-Operational State: Operational 00:08:00.126 Entry Latency: 16 microseconds 00:08:00.126 Exit Latency: 4 microseconds 00:08:00.126 Relative Read Throughput: 0 00:08:00.126 Relative Read Latency: 0 00:08:00.126 Relative Write Throughput: 0 00:08:00.126 Relative Write Latency: 0 00:08:00.126 Idle Power: Not Reported 00:08:00.126 Active Power: Not Reported 00:08:00.126 Non-Operational Permissive Mode: Not Supported 00:08:00.126 00:08:00.126 Health Information 00:08:00.126 ================== 00:08:00.126 Critical Warnings: 00:08:00.126 Available Spare Space: OK 00:08:00.126 Temperature: OK 00:08:00.126 Device Reliability: OK 00:08:00.126 Read Only: No 
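The health blocks in each dump print temperatures in Kelvin with a Celsius figure alongside (Current Temperature: 323 Kelvin (50 Celsius); threshold 343 Kelvin (70 Celsius)). The two figures differ by a plain 273-degree offset; a one-line sanity check (illustrative shell, not SPDK code):

  kelvin=323; echo "$((kelvin - 273)) Celsius"   # prints "50 Celsius"; 343 K -> 70 C the same way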
00:08:00.126 Volatile Memory Backup: OK 00:08:00.126 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.126 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.126 Available Spare: 0% 00:08:00.126 Available Spare Threshold: 0% 00:08:00.126 Life Percentage Used: 0% 00:08:00.126 Data Units Read: 629 00:08:00.126 Data Units Written: 557 00:08:00.126 Host Read Commands: 33072 00:08:00.126 Host Write Commands: 32858 00:08:00.126 Controller Busy Time: 0 minutes 00:08:00.126 Power Cycles: 0 00:08:00.126 Power On Hours: 0 hours 00:08:00.126 Unsafe Shutdowns: 0 00:08:00.126 Unrecoverable Media Errors: 0 00:08:00.126 Lifetime Error Log Entries: 0 00:08:00.126 Warning Temperature Time: 0 minutes 00:08:00.126 Critical Temperature Time: 0 minutes 00:08:00.126 00:08:00.126 Number of Queues 00:08:00.126 ================ 00:08:00.126 Number of I/O Submission Queues: 64 00:08:00.126 Number of I/O Completion Queues: 64 00:08:00.126 00:08:00.126 ZNS Specific Controller Data 00:08:00.126 ============================ 00:08:00.126 Zone Append Size Limit: 0 00:08:00.126 00:08:00.126 00:08:00.126 Active Namespaces 00:08:00.126 ================= 00:08:00.126 Namespace ID:1 00:08:00.126 Error Recovery Timeout: Unlimited 00:08:00.126 Command Set Identifier: NVM (00h) 00:08:00.126 Deallocate: Supported 00:08:00.126 Deallocated/Unwritten Error: Supported 00:08:00.126 Deallocated Read Value: All 0x00 00:08:00.126 Deallocate in Write Zeroes: Not Supported 00:08:00.126 Deallocated Guard Field: 0xFFFF 00:08:00.126 Flush: Supported 00:08:00.126 Reservation: Not Supported 00:08:00.127 Metadata Transferred as: Separate Metadata Buffer 00:08:00.127 Namespace Sharing Capabilities: Private 00:08:00.127 Size (in LBAs): 1548666 (5GiB) 00:08:00.127 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.127 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.127 Thin Provisioning: Not Supported 00:08:00.127 Per-NS Atomic Units: No 00:08:00.127 Maximum Single Source Range Length: 128 00:08:00.127 Maximum Copy Length: 128 00:08:00.127 Maximum Source Range Count: 128 00:08:00.127 NGUID/EUI64 Never Reused: No 00:08:00.127 Namespace Write Protected: No 00:08:00.127 Number of LBA Formats: 8 00:08:00.127 Current LBA Format: LBA Format #07 00:08:00.127 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.127 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.127 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.127 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.127 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.127 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.127 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.127 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.127 00:08:00.127 NVM Specific Namespace Data 00:08:00.127 =========================== 00:08:00.127 Logical Block Storage Tag Mask: 0 00:08:00.127 Protection Information Capabilities: 00:08:00.127 16b Guard Protection Information Storage Tag Support: No 00:08:00.127 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.127 Storage Tag Check Read Support: No 00:08:00.127 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended 
LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.127 ===================================================== 00:08:00.127 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.127 ===================================================== 00:08:00.127 Controller Capabilities/Features 00:08:00.127 ================================ 00:08:00.127 Vendor ID: 1b36 00:08:00.127 Subsystem Vendor ID: 1af4 00:08:00.127 Serial Number: 12341 00:08:00.127 Model Number: QEMU NVMe Ctrl 00:08:00.127 Firmware Version: 8.0.0 00:08:00.127 Recommended Arb Burst: 6 00:08:00.127 IEEE OUI Identifier: 00 54 52 00:08:00.127 Multi-path I/O 00:08:00.127 May have multiple subsystem ports: No 00:08:00.127 May have multiple controllers: No 00:08:00.127 Associated with SR-IOV VF: No 00:08:00.127 Max Data Transfer Size: 524288 00:08:00.127 Max Number of Namespaces: 256 00:08:00.127 Max Number of I/O Queues: 64 00:08:00.127 NVMe Specification Version (VS): 1.4 00:08:00.127 NVMe Specification Version (Identify): 1.4 00:08:00.127 Maximum Queue Entries: 2048 00:08:00.127 Contiguous Queues Required: Yes 00:08:00.127 Arbitration Mechanisms Supported 00:08:00.127 Weighted Round Robin: Not Supported 00:08:00.127 Vendor Specific: Not Supported 00:08:00.127 Reset Timeout: 7500 ms 00:08:00.127 Doorbell Stride: 4 bytes 00:08:00.127 NVM Subsystem Reset: Not Supported 00:08:00.127 Command Sets Supported 00:08:00.127 NVM Command Set: Supported 00:08:00.127 Boot Partition: Not Supported 00:08:00.127 Memory Page Size Minimum: 4096 bytes 00:08:00.127 Memory Page Size Maximum: 65536 bytes 00:08:00.127 Persistent Memory Region: Not Supported 00:08:00.127 Optional Asynchronous Events Supported 00:08:00.127 Namespace Attribute Notices: Supported 00:08:00.127 Firmware Activation Notices: Not Supported 00:08:00.127 ANA Change Notices: Not Supported 00:08:00.127 PLE Aggregate Log Change Notices: Not Supported 00:08:00.127 LBA Status Info Alert Notices: Not Supported 00:08:00.127 EGE Aggregate Log Change Notices: Not Supported 00:08:00.127 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.127 Zone Descriptor Change Notices: Not Supported 00:08:00.127 Discovery Log Change Notices: Not Supported 00:08:00.127 Controller Attributes 00:08:00.127 128-bit Host Identifier: Not Supported 00:08:00.127 Non-Operational Permissive Mode: Not Supported 00:08:00.127 NVM Sets: Not Supported 00:08:00.127 Read Recovery Levels: Not Supported 00:08:00.127 Endurance Groups: Not Supported 00:08:00.127 Predictable Latency Mode: Not Supported 00:08:00.127 Traffic Based Keep ALive: Not Supported 00:08:00.127 Namespace Granularity: Not Supported 00:08:00.127 SQ Associations: Not Supported 00:08:00.127 UUID List: Not Supported 00:08:00.127 Multi-Domain Subsystem: Not Supported 00:08:00.127 Fixed Capacity Management: Not Supported 00:08:00.127 Variable Capacity Management: Not Supported 00:08:00.127 Delete Endurance Group: Not Supported 00:08:00.127 Delete NVM Set: Not Supported 00:08:00.127 Extended LBA Formats Supported: Supported 00:08:00.127 Flexible Data Placement Supported: Not Supported 00:08:00.127 00:08:00.127 Controller Memory Buffer Support 00:08:00.127 ================================ 00:08:00.127 Supported: No 
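Each namespace section pairs an LBA count with a GiB total, and with a 4096-byte data size (LBA Format #04) the two are consistent; e.g. the fdp-subsys3 namespace earlier reports 262144 LBAs = 1GiB. A worked check (illustrative, assuming the 4096-byte block size shown for that format):

  lbas=262144; block=4096
  echo $((lbas * block))             # 1073741824 bytes
  echo $((lbas * block / 1024**3))   # 1 -> 1GiB, matching the dump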
00:08:00.127 00:08:00.127 Persistent Memory Region Support 00:08:00.127 ================================ 00:08:00.127 Supported: No 00:08:00.127 00:08:00.127 Admin Command Set Attributes 00:08:00.127 ============================ 00:08:00.127 Security Send/Receive: Not Supported 00:08:00.127 Format NVM: Supported 00:08:00.127 Firmware Activate/Download: Not Supported 00:08:00.127 Namespace Management: Supported 00:08:00.127 Device Self-Test: Not Supported 00:08:00.127 Directives: Supported 00:08:00.127 NVMe-MI: Not Supported 00:08:00.127 Virtualization Management: Not Supported 00:08:00.127 Doorbell Buffer Config: Supported 00:08:00.127 Get LBA Status Capability: Not Supported 00:08:00.127 Command & Feature Lockdown Capability: Not Supported 00:08:00.127 Abort Command Limit: 4 00:08:00.127 Async Event Request Limit: 4 00:08:00.127 Number of Firmware Slots: N/A 00:08:00.127 Firmware Slot 1 Read-Only: N/A 00:08:00.127 Firmware Activation Without Reset: N/A 00:08:00.127 Multiple Update Detection Support: N/A 00:08:00.127 Firmware Update Granularity: No Information Provided 00:08:00.127 Per-Namespace SMART Log: Yes 00:08:00.127 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.127 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:00.127 Command Effects Log Page: Supported 00:08:00.127 Get Log Page Extended Data: Supported 00:08:00.127 Telemetry Log Pages: Not Supported 00:08:00.127 Persistent Event Log Pages: Not Supported 00:08:00.127 Supported Log Pages Log Page: May Support 00:08:00.127 Commands Supported & Effects Log Page: Not Supported 00:08:00.127 Feature Identifiers & Effects Log Page:May Support 00:08:00.127 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.127 Data Area 4 for Telemetry Log: Not Supported 00:08:00.127 Error Log Page Entries Supported: 1 00:08:00.127 Keep Alive: Not Supported 00:08:00.127 00:08:00.127 NVM Command Set Attributes 00:08:00.127 ========================== 00:08:00.127 Submission Queue Entry Size 00:08:00.127 Max: 64 00:08:00.127 Min: 64 00:08:00.127 Completion Queue Entry Size 00:08:00.127 Max: 16 00:08:00.127 Min: 16 00:08:00.127 Number of Namespaces: 256 00:08:00.127 Compare Command: Supported 00:08:00.127 Write Uncorrectable Command: Not Supported 00:08:00.127 Dataset Management Command: Supported 00:08:00.127 Write Zeroes Command: Supported 00:08:00.127 Set Features Save Field: Supported 00:08:00.127 Reservations: Not Supported 00:08:00.127 Timestamp: Supported 00:08:00.127 Copy: Supported 00:08:00.127 Volatile Write Cache: Present 00:08:00.127 Atomic Write Unit (Normal): 1 00:08:00.127 Atomic Write Unit (PFail): 1 00:08:00.127 Atomic Compare & Write Unit: 1 00:08:00.127 Fused Compare & Write: Not Supported 00:08:00.127 Scatter-Gather List 00:08:00.127 SGL Command Set: Supported 00:08:00.127 SGL Keyed: Not Supported 00:08:00.127 SGL Bit Bucket Descriptor: Not Supported 00:08:00.128 SGL Metadata Pointer: Not Supported 00:08:00.128 Oversized SGL: Not Supported 00:08:00.128 SGL Metadata Address: Not Supported 00:08:00.128 SGL Offset: Not Supported 00:08:00.128 Transport SGL Data Block: Not Supported 00:08:00.128 Replay Protected Memory Block: Not Supported 00:08:00.128 00:08:00.128 Firmware Slot Information 00:08:00.128 ========================= 00:08:00.128 Active slot: 1 00:08:00.128 Slot 1 Firmware Revision: 1.0 00:08:00.128 00:08:00.128 00:08:00.128 Commands Supported and Effects 00:08:00.128 ============================== 00:08:00.128 Admin Commands 00:08:00.128 -------------- 00:08:00.128 Delete I/O Submission Queue (00h): Supported 
00:08:00.128 Create I/O Submission Queue (01h): Supported 00:08:00.128 Get Log Page (02h): Supported 00:08:00.128 Delete I/O Completion Queue (04h): Supported 00:08:00.128 Create I/O Completion Queue (05h): Supported 00:08:00.128 Identify (06h): Supported 00:08:00.128 Abort (08h): Supported 00:08:00.128 Set Features (09h): Supported 00:08:00.128 Get Features (0Ah): Supported 00:08:00.128 Asynchronous Event Request (0Ch): Supported 00:08:00.128 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.128 Directive Send (19h): Supported 00:08:00.128 Directive Receive (1Ah): Supported 00:08:00.128 Virtualization Management (1Ch): Supported 00:08:00.128 Doorbell Buffer Config (7Ch): Supported 00:08:00.128 Format NVM (80h): Supported LBA-Change 00:08:00.128 I/O Commands 00:08:00.128 ------------ 00:08:00.128 Flush (00h): Supported LBA-Change 00:08:00.128 Write (01h): Supported LBA-Change 00:08:00.128 Read (02h): Supported 00:08:00.128 Compare (05h): Supported 00:08:00.128 Write Zeroes (08h): Supported LBA-Change 00:08:00.128 Dataset Management (09h): Supported LBA-Change 00:08:00.128 Unknown (0Ch): Supported 00:08:00.128 Unknown (12h): Supported 00:08:00.128 Copy (19h): Supported LBA-Change 00:08:00.128 Unknown (1Dh): Supported LBA-Change 00:08:00.128 00:08:00.128 Error Log 00:08:00.128 ========= 00:08:00.128 00:08:00.128 Arbitration 00:08:00.128 =========== 00:08:00.128 Arbitration Burst: no limit 00:08:00.128 00:08:00.128 Power Management 00:08:00.128 ================ 00:08:00.128 Number of Power States: 1 00:08:00.128 Current Power State: Power State #0 00:08:00.128 Power State #0: 00:08:00.128 Max Power: 25.00 W 00:08:00.128 Non-Operational State: Operational 00:08:00.128 Entry Latency: 16 microseconds 00:08:00.128 Exit Latency: 4 microseconds 00:08:00.128 Relative Read Throughput: 0 00:08:00.128 Relative Read Latency: 0 00:08:00.128 Relative Write Throughput: 0 00:08:00.128 Relative Write Latency: 0 00:08:00.128 Idle Power: Not Reported 00:08:00.128 Active Power: Not Reported 00:08:00.128 Non-Operational Permissive Mode: Not Supported 00:08:00.128 00:08:00.128 Health Information 00:08:00.128 ================== 00:08:00.128 Critical Warnings: 00:08:00.128 Available Spare Space: OK 00:08:00.128 Temperature: OK 00:08:00.128 Device Reliability: OK 00:08:00.128 Read Only: No 00:08:00.128 Volatile Memory Backup: OK 00:08:00.128 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.128 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.128 Available Spare: 0% 00:08:00.128 Available Spare Threshold: 0% 00:08:00.128 Life Percentage Used: 0% 00:08:00.128 Data Units Read: 966 00:08:00.128 Data Units Written: 835 00:08:00.128 Host Read Commands: 49622 00:08:00.128 Host Write Commands: 48472 00:08:00.128 Controller Busy Time: 0 minutes 00:08:00.128 Power Cycles: 0 00:08:00.128 Power On Hours: 0 hours 00:08:00.128 Unsafe Shutdowns: 0 00:08:00.128 Unrecoverable Media Errors: 0 00:08:00.128 Lifetime Error Log Entries: 0 00:08:00.128 Warning Temperature Time: 0 minutes 00:08:00.128 Critical Temperature Time: 0 minutes 00:08:00.128 00:08:00.128 Number of Queues 00:08:00.128 ================ 00:08:00.128 Number of I/O Submission Queues: 64 00:08:00.128 Number of I/O Completion Queues: 64 00:08:00.128 00:08:00.128 ZNS Specific Controller Data 00:08:00.128 ============================ 00:08:00.128 Zone Append Size Limit: 0 00:08:00.128 00:08:00.128 00:08:00.128 Active Namespaces 00:08:00.128 ================= 00:08:00.128 Namespace ID:1 00:08:00.128 Error Recovery Timeout: Unlimited 00:08:00.128 
Command Set Identifier: NVM (00h) 00:08:00.128 Deallocate: Supported 00:08:00.128 Deallocated/Unwritten Error: Supported 00:08:00.128 Deallocated Read Value: All 0x00 00:08:00.128 Deallocate in Write Zeroes: Not Supported 00:08:00.128 Deallocated Guard Field: 0xFFFF 00:08:00.128 Flush: Supported 00:08:00.128 Reservation: Not Supported 00:08:00.128 Namespace Sharing Capabilities: Private 00:08:00.128 Size (in LBAs): 1310720 (5GiB) 00:08:00.128 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.128 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.128 Thin Provisioning: Not Supported 00:08:00.128 Per-NS Atomic Units: No 00:08:00.128 Maximum Single Source Range Length: 128 00:08:00.128 Maximum Copy Length: 128 00:08:00.128 Maximum Source Range Count: 128 00:08:00.128 NGUID/EUI64 Never Reused: No 00:08:00.128 Namespace Write Protected: No 00:08:00.128 Number of LBA Formats: 8 00:08:00.128 Current LBA Format: LBA Format #04 00:08:00.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.128 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.128 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.128 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.128 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.128 LBA Format[2024-11-20 09:20:25.470191] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62926 terminated unexpected 00:08:00.128 [2024-11-20 09:20:25.471196] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62926 terminated unexpected 00:08:00.128 #05: Data Size: 4096 Metadata Size: 8 00:08:00.128 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.128 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.128 00:08:00.128 NVM Specific Namespace Data 00:08:00.128 =========================== 00:08:00.128 Logical Block Storage Tag Mask: 0 00:08:00.128 Protection Information Capabilities: 00:08:00.128 16b Guard Protection Information Storage Tag Support: No 00:08:00.128 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.128 Storage Tag Check Read Support: No 00:08:00.128 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.128 ===================================================== 00:08:00.128 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.128 ===================================================== 00:08:00.128 Controller Capabilities/Features 00:08:00.128 ================================ 00:08:00.128 Vendor ID: 1b36 00:08:00.128 Subsystem Vendor ID: 1af4 00:08:00.128 Serial Number: 12342 00:08:00.128 Model Number: QEMU NVMe Ctrl 00:08:00.128 Firmware Version: 8.0.0 00:08:00.128 Recommended Arb Burst: 6 00:08:00.128 IEEE OUI Identifier: 00 54 52 00:08:00.128 Multi-path I/O 00:08:00.128 May 
have multiple subsystem ports: No 00:08:00.128 May have multiple controllers: No 00:08:00.128 Associated with SR-IOV VF: No 00:08:00.128 Max Data Transfer Size: 524288 00:08:00.128 Max Number of Namespaces: 256 00:08:00.128 Max Number of I/O Queues: 64 00:08:00.128 NVMe Specification Version (VS): 1.4 00:08:00.128 NVMe Specification Version (Identify): 1.4 00:08:00.128 Maximum Queue Entries: 2048 00:08:00.128 Contiguous Queues Required: Yes 00:08:00.128 Arbitration Mechanisms Supported 00:08:00.128 Weighted Round Robin: Not Supported 00:08:00.128 Vendor Specific: Not Supported 00:08:00.128 Reset Timeout: 7500 ms 00:08:00.128 Doorbell Stride: 4 bytes 00:08:00.129 NVM Subsystem Reset: Not Supported 00:08:00.129 Command Sets Supported 00:08:00.129 NVM Command Set: Supported 00:08:00.129 Boot Partition: Not Supported 00:08:00.129 Memory Page Size Minimum: 4096 bytes 00:08:00.129 Memory Page Size Maximum: 65536 bytes 00:08:00.129 Persistent Memory Region: Not Supported 00:08:00.129 Optional Asynchronous Events Supported 00:08:00.129 Namespace Attribute Notices: Supported 00:08:00.129 Firmware Activation Notices: Not Supported 00:08:00.129 ANA Change Notices: Not Supported 00:08:00.129 PLE Aggregate Log Change Notices: Not Supported 00:08:00.129 LBA Status Info Alert Notices: Not Supported 00:08:00.129 EGE Aggregate Log Change Notices: Not Supported 00:08:00.129 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.129 Zone Descriptor Change Notices: Not Supported 00:08:00.129 Discovery Log Change Notices: Not Supported 00:08:00.129 Controller Attributes 00:08:00.129 128-bit Host Identifier: Not Supported 00:08:00.129 Non-Operational Permissive Mode: Not Supported 00:08:00.129 NVM Sets: Not Supported 00:08:00.129 Read Recovery Levels: Not Supported 00:08:00.129 Endurance Groups: Not Supported 00:08:00.129 Predictable Latency Mode: Not Supported 00:08:00.129 Traffic Based Keep ALive: Not Supported 00:08:00.129 Namespace Granularity: Not Supported 00:08:00.129 SQ Associations: Not Supported 00:08:00.129 UUID List: Not Supported 00:08:00.129 Multi-Domain Subsystem: Not Supported 00:08:00.129 Fixed Capacity Management: Not Supported 00:08:00.129 Variable Capacity Management: Not Supported 00:08:00.129 Delete Endurance Group: Not Supported 00:08:00.129 Delete NVM Set: Not Supported 00:08:00.129 Extended LBA Formats Supported: Supported 00:08:00.129 Flexible Data Placement Supported: Not Supported 00:08:00.129 00:08:00.129 Controller Memory Buffer Support 00:08:00.129 ================================ 00:08:00.129 Supported: No 00:08:00.129 00:08:00.129 Persistent Memory Region Support 00:08:00.129 ================================ 00:08:00.129 Supported: No 00:08:00.129 00:08:00.129 Admin Command Set Attributes 00:08:00.129 ============================ 00:08:00.129 Security Send/Receive: Not Supported 00:08:00.129 Format NVM: Supported 00:08:00.129 Firmware Activate/Download: Not Supported 00:08:00.129 Namespace Management: Supported 00:08:00.129 Device Self-Test: Not Supported 00:08:00.129 Directives: Supported 00:08:00.129 NVMe-MI: Not Supported 00:08:00.129 Virtualization Management: Not Supported 00:08:00.129 Doorbell Buffer Config: Supported 00:08:00.129 Get LBA Status Capability: Not Supported 00:08:00.129 Command & Feature Lockdown Capability: Not Supported 00:08:00.129 Abort Command Limit: 4 00:08:00.129 Async Event Request Limit: 4 00:08:00.129 Number of Firmware Slots: N/A 00:08:00.129 Firmware Slot 1 Read-Only: N/A 00:08:00.129 Firmware Activation Without Reset: N/A 00:08:00.129 
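With four controllers dumped back to back, one quick way to skim a report like this is to filter for a single field per controller. A rough sketch (the -i 0 invocation is the one traced at the start of this identify run; the grep pattern simply matches the field name as printed above):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 | grep 'Serial Number:'
  # in this run: 12343, 12340, 12341, 12342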
Multiple Update Detection Support: N/A 00:08:00.129 Firmware Update Granularity: No Information Provided 00:08:00.129 Per-Namespace SMART Log: Yes 00:08:00.129 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.129 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:00.129 Command Effects Log Page: Supported 00:08:00.129 Get Log Page Extended Data: Supported 00:08:00.129 Telemetry Log Pages: Not Supported 00:08:00.129 Persistent Event Log Pages: Not Supported 00:08:00.129 Supported Log Pages Log Page: May Support 00:08:00.129 Commands Supported & Effects Log Page: Not Supported 00:08:00.129 Feature Identifiers & Effects Log Page:May Support 00:08:00.129 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.129 Data Area 4 for Telemetry Log: Not Supported 00:08:00.129 Error Log Page Entries Supported: 1 00:08:00.129 Keep Alive: Not Supported 00:08:00.129 00:08:00.129 NVM Command Set Attributes 00:08:00.129 ========================== 00:08:00.129 Submission Queue Entry Size 00:08:00.129 Max: 64 00:08:00.129 Min: 64 00:08:00.129 Completion Queue Entry Size 00:08:00.129 Max: 16 00:08:00.129 Min: 16 00:08:00.129 Number of Namespaces: 256 00:08:00.129 Compare Command: Supported 00:08:00.129 Write Uncorrectable Command: Not Supported 00:08:00.129 Dataset Management Command: Supported 00:08:00.129 Write Zeroes Command: Supported 00:08:00.129 Set Features Save Field: Supported 00:08:00.129 Reservations: Not Supported 00:08:00.129 Timestamp: Supported 00:08:00.129 Copy: Supported 00:08:00.129 Volatile Write Cache: Present 00:08:00.129 Atomic Write Unit (Normal): 1 00:08:00.129 Atomic Write Unit (PFail): 1 00:08:00.129 Atomic Compare & Write Unit: 1 00:08:00.129 Fused Compare & Write: Not Supported 00:08:00.129 Scatter-Gather List 00:08:00.129 SGL Command Set: Supported 00:08:00.129 SGL Keyed: Not Supported 00:08:00.129 SGL Bit Bucket Descriptor: Not Supported 00:08:00.129 SGL Metadata Pointer: Not Supported 00:08:00.129 Oversized SGL: Not Supported 00:08:00.129 SGL Metadata Address: Not Supported 00:08:00.129 SGL Offset: Not Supported 00:08:00.129 Transport SGL Data Block: Not Supported 00:08:00.129 Replay Protected Memory Block: Not Supported 00:08:00.129 00:08:00.129 Firmware Slot Information 00:08:00.129 ========================= 00:08:00.129 Active slot: 1 00:08:00.129 Slot 1 Firmware Revision: 1.0 00:08:00.129 00:08:00.129 00:08:00.129 Commands Supported and Effects 00:08:00.129 ============================== 00:08:00.129 Admin Commands 00:08:00.129 -------------- 00:08:00.129 Delete I/O Submission Queue (00h): Supported 00:08:00.129 Create I/O Submission Queue (01h): Supported 00:08:00.129 Get Log Page (02h): Supported 00:08:00.129 Delete I/O Completion Queue (04h): Supported 00:08:00.129 Create I/O Completion Queue (05h): Supported 00:08:00.129 Identify (06h): Supported 00:08:00.129 Abort (08h): Supported 00:08:00.129 Set Features (09h): Supported 00:08:00.129 Get Features (0Ah): Supported 00:08:00.129 Asynchronous Event Request (0Ch): Supported 00:08:00.129 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.129 Directive Send (19h): Supported 00:08:00.129 Directive Receive (1Ah): Supported 00:08:00.129 Virtualization Management (1Ch): Supported 00:08:00.129 Doorbell Buffer Config (7Ch): Supported 00:08:00.129 Format NVM (80h): Supported LBA-Change 00:08:00.129 I/O Commands 00:08:00.129 ------------ 00:08:00.129 Flush (00h): Supported LBA-Change 00:08:00.129 Write (01h): Supported LBA-Change 00:08:00.129 Read (02h): Supported 00:08:00.129 Compare (05h): Supported 
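The Commands Supported and Effects section above lists each command with its opcode in parentheses (Identify is 06h, Get Log Page is 02h, and so on). A sketch over the same hypothetical identify-12342.log that reduces that section to "opcode name" pairs:

  # Extract "opcode name" pairs, e.g. "06h Identify" (assumed log filename).
  grep -oE '[A-Za-z][A-Za-z /&]* \([0-9A-F]{2}h\): Supported' identify-12342.log \
      | sed -E 's/^(.*) \(([0-9A-F]{2}h)\): Supported$/\2 \1/' \
      | sort -u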
00:08:00.129 Write Zeroes (08h): Supported LBA-Change 00:08:00.129 Dataset Management (09h): Supported LBA-Change 00:08:00.129 Unknown (0Ch): Supported 00:08:00.129 Unknown (12h): Supported 00:08:00.129 Copy (19h): Supported LBA-Change 00:08:00.129 Unknown (1Dh): Supported LBA-Change 00:08:00.129 00:08:00.129 Error Log 00:08:00.129 ========= 00:08:00.129 00:08:00.129 Arbitration 00:08:00.129 =========== 00:08:00.129 Arbitration Burst: no limit 00:08:00.129 00:08:00.129 Power Management 00:08:00.129 ================ 00:08:00.129 Number of Power States: 1 00:08:00.129 Current Power State: Power State #0 00:08:00.129 Power State #0: 00:08:00.129 Max Power: 25.00 W 00:08:00.129 Non-Operational State: Operational 00:08:00.129 Entry Latency: 16 microseconds 00:08:00.129 Exit Latency: 4 microseconds 00:08:00.129 Relative Read Throughput: 0 00:08:00.129 Relative Read Latency: 0 00:08:00.129 Relative Write Throughput: 0 00:08:00.129 Relative Write Latency: 0 00:08:00.129 Idle Power: Not Reported 00:08:00.129 Active Power: Not Reported 00:08:00.129 Non-Operational Permissive Mode: Not Supported 00:08:00.129 00:08:00.129 Health Information 00:08:00.129 ================== 00:08:00.129 Critical Warnings: 00:08:00.129 Available Spare Space: OK 00:08:00.129 Temperature: OK 00:08:00.130 Device Reliability: OK 00:08:00.130 Read Only: No 00:08:00.130 Volatile Memory Backup: OK 00:08:00.130 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.130 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.130 Available Spare: 0% 00:08:00.130 Available Spare Threshold: 0% 00:08:00.130 Life Percentage Used: 0% 00:08:00.130 Data Units Read: 1970 00:08:00.130 Data Units Written: 1757 00:08:00.130 Host Read Commands: 101067 00:08:00.130 Host Write Commands: 99336 00:08:00.130 Controller Busy Time: 0 minutes 00:08:00.130 Power Cycles: 0 00:08:00.130 Power On Hours: 0 hours 00:08:00.130 Unsafe Shutdowns: 0 00:08:00.130 Unrecoverable Media Errors: 0 00:08:00.130 Lifetime Error Log Entries: 0 00:08:00.130 Warning Temperature Time: 0 minutes 00:08:00.130 Critical Temperature Time: 0 minutes 00:08:00.130 00:08:00.130 Number of Queues 00:08:00.130 ================ 00:08:00.130 Number of I/O Submission Queues: 64 00:08:00.130 Number of I/O Completion Queues: 64 00:08:00.130 00:08:00.130 ZNS Specific Controller Data 00:08:00.130 ============================ 00:08:00.130 Zone Append Size Limit: 0 00:08:00.130 00:08:00.130 00:08:00.130 Active Namespaces 00:08:00.130 ================= 00:08:00.130 Namespace ID:1 00:08:00.130 Error Recovery Timeout: Unlimited 00:08:00.130 Command Set Identifier: NVM (00h) 00:08:00.130 Deallocate: Supported 00:08:00.130 Deallocated/Unwritten Error: Supported 00:08:00.130 Deallocated Read Value: All 0x00 00:08:00.130 Deallocate in Write Zeroes: Not Supported 00:08:00.130 Deallocated Guard Field: 0xFFFF 00:08:00.130 Flush: Supported 00:08:00.130 Reservation: Not Supported 00:08:00.130 Namespace Sharing Capabilities: Private 00:08:00.130 Size (in LBAs): 1048576 (4GiB) 00:08:00.130 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.130 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.130 Thin Provisioning: Not Supported 00:08:00.130 Per-NS Atomic Units: No 00:08:00.130 Maximum Single Source Range Length: 128 00:08:00.130 Maximum Copy Length: 128 00:08:00.130 Maximum Source Range Count: 128 00:08:00.130 NGUID/EUI64 Never Reused: No 00:08:00.130 Namespace Write Protected: No 00:08:00.130 Number of LBA Formats: 8 00:08:00.130 Current LBA Format: LBA Format #04 00:08:00.130 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:08:00.130 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.130 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.130 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.130 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.130 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.130 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.130 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.130 00:08:00.130 NVM Specific Namespace Data 00:08:00.130 =========================== 00:08:00.130 Logical Block Storage Tag Mask: 0 00:08:00.130 Protection Information Capabilities: 00:08:00.130 16b Guard Protection Information Storage Tag Support: No 00:08:00.130 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.130 Storage Tag Check Read Support: No 00:08:00.130 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Namespace ID:2 00:08:00.130 Error Recovery Timeout: Unlimited 00:08:00.130 Command Set Identifier: NVM (00h) 00:08:00.130 Deallocate: Supported 00:08:00.130 Deallocated/Unwritten Error: Supported 00:08:00.130 Deallocated Read Value: All 0x00 00:08:00.130 Deallocate in Write Zeroes: Not Supported 00:08:00.130 Deallocated Guard Field: 0xFFFF 00:08:00.130 Flush: Supported 00:08:00.130 Reservation: Not Supported 00:08:00.130 Namespace Sharing Capabilities: Private 00:08:00.130 Size (in LBAs): 1048576 (4GiB) 00:08:00.130 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.130 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.130 Thin Provisioning: Not Supported 00:08:00.130 Per-NS Atomic Units: No 00:08:00.130 Maximum Single Source Range Length: 128 00:08:00.130 Maximum Copy Length: 128 00:08:00.130 Maximum Source Range Count: 128 00:08:00.130 NGUID/EUI64 Never Reused: No 00:08:00.130 Namespace Write Protected: No 00:08:00.130 Number of LBA Formats: 8 00:08:00.130 Current LBA Format: LBA Format #04 00:08:00.130 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.130 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.130 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.130 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.130 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.130 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.130 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.130 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.130 00:08:00.130 NVM Specific Namespace Data 00:08:00.130 =========================== 00:08:00.130 Logical Block Storage Tag Mask: 0 00:08:00.130 Protection Information Capabilities: 00:08:00.130 16b Guard Protection Information Storage Tag Support: No 00:08:00.130 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.130 Storage 
Tag Check Read Support: No 00:08:00.130 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Namespace ID:3 00:08:00.130 Error Recovery Timeout: Unlimited 00:08:00.130 Command Set Identifier: NVM (00h) 00:08:00.130 Deallocate: Supported 00:08:00.130 Deallocated/Unwritten Error: Supported 00:08:00.130 Deallocated Read Value: All 0x00 00:08:00.130 Deallocate in Write Zeroes: Not Supported 00:08:00.130 Deallocated Guard Field: 0xFFFF 00:08:00.130 Flush: Supported 00:08:00.130 Reservation: Not Supported 00:08:00.130 Namespace Sharing Capabilities: Private 00:08:00.130 Size (in LBAs): 1048576 (4GiB) 00:08:00.130 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.130 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.130 Thin Provisioning: Not Supported 00:08:00.130 Per-NS Atomic Units: No 00:08:00.130 Maximum Single Source Range Length: 128 00:08:00.130 Maximum Copy Length: 128 00:08:00.130 Maximum Source Range Count: 128 00:08:00.130 NGUID/EUI64 Never Reused: No 00:08:00.130 Namespace Write Protected: No 00:08:00.130 Number of LBA Formats: 8 00:08:00.130 Current LBA Format: LBA Format #04 00:08:00.130 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.130 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.130 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.130 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.130 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.130 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.130 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.130 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.130 00:08:00.130 NVM Specific Namespace Data 00:08:00.130 =========================== 00:08:00.130 Logical Block Storage Tag Mask: 0 00:08:00.130 Protection Information Capabilities: 00:08:00.130 16b Guard Protection Information Storage Tag Support: No 00:08:00.130 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.130 Storage Tag Check Read Support: No 00:08:00.130 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.130 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.131 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:00.131 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.131 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:00.392 ===================================================== 00:08:00.392 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.392 ===================================================== 00:08:00.392 Controller Capabilities/Features 00:08:00.392 ================================ 00:08:00.392 Vendor ID: 1b36 00:08:00.392 Subsystem Vendor ID: 1af4 00:08:00.392 Serial Number: 12340 00:08:00.392 Model Number: QEMU NVMe Ctrl 00:08:00.392 Firmware Version: 8.0.0 00:08:00.392 Recommended Arb Burst: 6 00:08:00.392 IEEE OUI Identifier: 00 54 52 00:08:00.392 Multi-path I/O 00:08:00.392 May have multiple subsystem ports: No 00:08:00.392 May have multiple controllers: No 00:08:00.392 Associated with SR-IOV VF: No 00:08:00.392 Max Data Transfer Size: 524288 00:08:00.392 Max Number of Namespaces: 256 00:08:00.392 Max Number of I/O Queues: 64 00:08:00.392 NVMe Specification Version (VS): 1.4 00:08:00.392 NVMe Specification Version (Identify): 1.4 00:08:00.392 Maximum Queue Entries: 2048 00:08:00.392 Contiguous Queues Required: Yes 00:08:00.392 Arbitration Mechanisms Supported 00:08:00.392 Weighted Round Robin: Not Supported 00:08:00.392 Vendor Specific: Not Supported 00:08:00.392 Reset Timeout: 7500 ms 00:08:00.392 Doorbell Stride: 4 bytes 00:08:00.392 NVM Subsystem Reset: Not Supported 00:08:00.392 Command Sets Supported 00:08:00.392 NVM Command Set: Supported 00:08:00.392 Boot Partition: Not Supported 00:08:00.392 Memory Page Size Minimum: 4096 bytes 00:08:00.392 Memory Page Size Maximum: 65536 bytes 00:08:00.392 Persistent Memory Region: Not Supported 00:08:00.392 Optional Asynchronous Events Supported 00:08:00.392 Namespace Attribute Notices: Supported 00:08:00.392 Firmware Activation Notices: Not Supported 00:08:00.392 ANA Change Notices: Not Supported 00:08:00.392 PLE Aggregate Log Change Notices: Not Supported 00:08:00.392 LBA Status Info Alert Notices: Not Supported 00:08:00.392 EGE Aggregate Log Change Notices: Not Supported 00:08:00.392 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.392 Zone Descriptor Change Notices: Not Supported 00:08:00.392 Discovery Log Change Notices: Not Supported 00:08:00.392 Controller Attributes 00:08:00.392 128-bit Host Identifier: Not Supported 00:08:00.392 Non-Operational Permissive Mode: Not Supported 00:08:00.392 NVM Sets: Not Supported 00:08:00.392 Read Recovery Levels: Not Supported 00:08:00.392 Endurance Groups: Not Supported 00:08:00.392 Predictable Latency Mode: Not Supported 00:08:00.392 Traffic Based Keep ALive: Not Supported 00:08:00.392 Namespace Granularity: Not Supported 00:08:00.392 SQ Associations: Not Supported 00:08:00.392 UUID List: Not Supported 00:08:00.392 Multi-Domain Subsystem: Not Supported 00:08:00.392 Fixed Capacity Management: Not Supported 00:08:00.392 Variable Capacity Management: Not Supported 00:08:00.392 Delete Endurance Group: Not Supported 00:08:00.392 Delete NVM Set: Not Supported 00:08:00.392 Extended LBA Formats Supported: Supported 00:08:00.392 Flexible Data Placement Supported: Not Supported 00:08:00.392 00:08:00.392 Controller Memory Buffer Support 00:08:00.392 ================================ 00:08:00.392 Supported: No 00:08:00.392 00:08:00.392 Persistent Memory Region Support 00:08:00.392 
================================ 00:08:00.392 Supported: No 00:08:00.392 00:08:00.392 Admin Command Set Attributes 00:08:00.392 ============================ 00:08:00.392 Security Send/Receive: Not Supported 00:08:00.392 Format NVM: Supported 00:08:00.392 Firmware Activate/Download: Not Supported 00:08:00.392 Namespace Management: Supported 00:08:00.392 Device Self-Test: Not Supported 00:08:00.392 Directives: Supported 00:08:00.392 NVMe-MI: Not Supported 00:08:00.392 Virtualization Management: Not Supported 00:08:00.392 Doorbell Buffer Config: Supported 00:08:00.392 Get LBA Status Capability: Not Supported 00:08:00.392 Command & Feature Lockdown Capability: Not Supported 00:08:00.392 Abort Command Limit: 4 00:08:00.392 Async Event Request Limit: 4 00:08:00.392 Number of Firmware Slots: N/A 00:08:00.392 Firmware Slot 1 Read-Only: N/A 00:08:00.392 Firmware Activation Without Reset: N/A 00:08:00.392 Multiple Update Detection Support: N/A 00:08:00.392 Firmware Update Granularity: No Information Provided 00:08:00.392 Per-Namespace SMART Log: Yes 00:08:00.392 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.392 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.392 Command Effects Log Page: Supported 00:08:00.392 Get Log Page Extended Data: Supported 00:08:00.392 Telemetry Log Pages: Not Supported 00:08:00.392 Persistent Event Log Pages: Not Supported 00:08:00.392 Supported Log Pages Log Page: May Support 00:08:00.392 Commands Supported & Effects Log Page: Not Supported 00:08:00.392 Feature Identifiers & Effects Log Page:May Support 00:08:00.392 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.392 Data Area 4 for Telemetry Log: Not Supported 00:08:00.392 Error Log Page Entries Supported: 1 00:08:00.392 Keep Alive: Not Supported 00:08:00.392 00:08:00.392 NVM Command Set Attributes 00:08:00.392 ========================== 00:08:00.392 Submission Queue Entry Size 00:08:00.392 Max: 64 00:08:00.392 Min: 64 00:08:00.392 Completion Queue Entry Size 00:08:00.392 Max: 16 00:08:00.393 Min: 16 00:08:00.393 Number of Namespaces: 256 00:08:00.393 Compare Command: Supported 00:08:00.393 Write Uncorrectable Command: Not Supported 00:08:00.393 Dataset Management Command: Supported 00:08:00.393 Write Zeroes Command: Supported 00:08:00.393 Set Features Save Field: Supported 00:08:00.393 Reservations: Not Supported 00:08:00.393 Timestamp: Supported 00:08:00.393 Copy: Supported 00:08:00.393 Volatile Write Cache: Present 00:08:00.393 Atomic Write Unit (Normal): 1 00:08:00.393 Atomic Write Unit (PFail): 1 00:08:00.393 Atomic Compare & Write Unit: 1 00:08:00.393 Fused Compare & Write: Not Supported 00:08:00.393 Scatter-Gather List 00:08:00.393 SGL Command Set: Supported 00:08:00.393 SGL Keyed: Not Supported 00:08:00.393 SGL Bit Bucket Descriptor: Not Supported 00:08:00.393 SGL Metadata Pointer: Not Supported 00:08:00.393 Oversized SGL: Not Supported 00:08:00.393 SGL Metadata Address: Not Supported 00:08:00.393 SGL Offset: Not Supported 00:08:00.393 Transport SGL Data Block: Not Supported 00:08:00.393 Replay Protected Memory Block: Not Supported 00:08:00.393 00:08:00.393 Firmware Slot Information 00:08:00.393 ========================= 00:08:00.393 Active slot: 1 00:08:00.393 Slot 1 Firmware Revision: 1.0 00:08:00.393 00:08:00.393 00:08:00.393 Commands Supported and Effects 00:08:00.393 ============================== 00:08:00.393 Admin Commands 00:08:00.393 -------------- 00:08:00.393 Delete I/O Submission Queue (00h): Supported 00:08:00.393 Create I/O Submission Queue (01h): Supported 00:08:00.393 
Get Log Page (02h): Supported 00:08:00.393 Delete I/O Completion Queue (04h): Supported 00:08:00.393 Create I/O Completion Queue (05h): Supported 00:08:00.393 Identify (06h): Supported 00:08:00.393 Abort (08h): Supported 00:08:00.393 Set Features (09h): Supported 00:08:00.393 Get Features (0Ah): Supported 00:08:00.393 Asynchronous Event Request (0Ch): Supported 00:08:00.393 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.393 Directive Send (19h): Supported 00:08:00.393 Directive Receive (1Ah): Supported 00:08:00.393 Virtualization Management (1Ch): Supported 00:08:00.393 Doorbell Buffer Config (7Ch): Supported 00:08:00.393 Format NVM (80h): Supported LBA-Change 00:08:00.393 I/O Commands 00:08:00.393 ------------ 00:08:00.393 Flush (00h): Supported LBA-Change 00:08:00.393 Write (01h): Supported LBA-Change 00:08:00.393 Read (02h): Supported 00:08:00.393 Compare (05h): Supported 00:08:00.393 Write Zeroes (08h): Supported LBA-Change 00:08:00.393 Dataset Management (09h): Supported LBA-Change 00:08:00.393 Unknown (0Ch): Supported 00:08:00.393 Unknown (12h): Supported 00:08:00.393 Copy (19h): Supported LBA-Change 00:08:00.393 Unknown (1Dh): Supported LBA-Change 00:08:00.393 00:08:00.393 Error Log 00:08:00.393 ========= 00:08:00.393 00:08:00.393 Arbitration 00:08:00.393 =========== 00:08:00.393 Arbitration Burst: no limit 00:08:00.393 00:08:00.393 Power Management 00:08:00.393 ================ 00:08:00.393 Number of Power States: 1 00:08:00.393 Current Power State: Power State #0 00:08:00.393 Power State #0: 00:08:00.393 Max Power: 25.00 W 00:08:00.393 Non-Operational State: Operational 00:08:00.393 Entry Latency: 16 microseconds 00:08:00.393 Exit Latency: 4 microseconds 00:08:00.393 Relative Read Throughput: 0 00:08:00.393 Relative Read Latency: 0 00:08:00.393 Relative Write Throughput: 0 00:08:00.393 Relative Write Latency: 0 00:08:00.393 Idle Power: Not Reported 00:08:00.393 Active Power: Not Reported 00:08:00.393 Non-Operational Permissive Mode: Not Supported 00:08:00.393 00:08:00.393 Health Information 00:08:00.393 ================== 00:08:00.393 Critical Warnings: 00:08:00.393 Available Spare Space: OK 00:08:00.393 Temperature: OK 00:08:00.393 Device Reliability: OK 00:08:00.393 Read Only: No 00:08:00.393 Volatile Memory Backup: OK 00:08:00.393 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.393 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.393 Available Spare: 0% 00:08:00.393 Available Spare Threshold: 0% 00:08:00.393 Life Percentage Used: 0% 00:08:00.393 Data Units Read: 629 00:08:00.393 Data Units Written: 557 00:08:00.393 Host Read Commands: 33072 00:08:00.393 Host Write Commands: 32858 00:08:00.393 Controller Busy Time: 0 minutes 00:08:00.393 Power Cycles: 0 00:08:00.393 Power On Hours: 0 hours 00:08:00.393 Unsafe Shutdowns: 0 00:08:00.393 Unrecoverable Media Errors: 0 00:08:00.393 Lifetime Error Log Entries: 0 00:08:00.393 Warning Temperature Time: 0 minutes 00:08:00.393 Critical Temperature Time: 0 minutes 00:08:00.393 00:08:00.393 Number of Queues 00:08:00.393 ================ 00:08:00.393 Number of I/O Submission Queues: 64 00:08:00.393 Number of I/O Completion Queues: 64 00:08:00.393 00:08:00.393 ZNS Specific Controller Data 00:08:00.393 ============================ 00:08:00.393 Zone Append Size Limit: 0 00:08:00.393 00:08:00.393 00:08:00.393 Active Namespaces 00:08:00.393 ================= 00:08:00.393 Namespace ID:1 00:08:00.393 Error Recovery Timeout: Unlimited 00:08:00.393 Command Set Identifier: NVM (00h) 00:08:00.393 Deallocate: Supported 
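The health section above reports temperatures in Kelvin with the Celsius value in parentheses; the conversion is simply C = K - 273, so 323 Kelvin is 50 Celsius and the 343 Kelvin threshold is 70 Celsius. A small check against a saved dump (identify-12340.log is an assumed filename):

  # Re-derive Celsius from the reported Kelvin value.
  awk '/Current Temperature:/ {
      for (i = 1; i < NF; i++)
          if ($(i+1) == "Kelvin") print $i - 273 " C"   # 323 -> 50 C
  }' identify-12340.log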
00:08:00.393 Deallocated/Unwritten Error: Supported 00:08:00.393 Deallocated Read Value: All 0x00 00:08:00.393 Deallocate in Write Zeroes: Not Supported 00:08:00.393 Deallocated Guard Field: 0xFFFF 00:08:00.393 Flush: Supported 00:08:00.393 Reservation: Not Supported 00:08:00.393 Metadata Transferred as: Separate Metadata Buffer 00:08:00.393 Namespace Sharing Capabilities: Private 00:08:00.393 Size (in LBAs): 1548666 (5GiB) 00:08:00.393 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.393 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.393 Thin Provisioning: Not Supported 00:08:00.393 Per-NS Atomic Units: No 00:08:00.393 Maximum Single Source Range Length: 128 00:08:00.393 Maximum Copy Length: 128 00:08:00.393 Maximum Source Range Count: 128 00:08:00.393 NGUID/EUI64 Never Reused: No 00:08:00.393 Namespace Write Protected: No 00:08:00.393 Number of LBA Formats: 8 00:08:00.393 Current LBA Format: LBA Format #07 00:08:00.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.393 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.393 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.393 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.393 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.393 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.393 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.393 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.393 00:08:00.393 NVM Specific Namespace Data 00:08:00.393 =========================== 00:08:00.393 Logical Block Storage Tag Mask: 0 00:08:00.393 Protection Information Capabilities: 00:08:00.393 16b Guard Protection Information Storage Tag Support: No 00:08:00.393 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.393 Storage Tag Check Read Support: No 00:08:00.393 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.393 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.393 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:00.655 ===================================================== 00:08:00.655 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.655 ===================================================== 00:08:00.655 Controller Capabilities/Features 00:08:00.655 ================================ 00:08:00.655 Vendor ID: 1b36 00:08:00.655 Subsystem Vendor ID: 1af4 00:08:00.655 Serial Number: 12341 00:08:00.655 Model Number: QEMU NVMe Ctrl 00:08:00.655 Firmware Version: 8.0.0 00:08:00.655 Recommended Arb Burst: 6 00:08:00.655 IEEE OUI Identifier: 00 54 52 00:08:00.655 Multi-path I/O 00:08:00.655 May have multiple subsystem ports: No 00:08:00.655 May have multiple 
controllers: No 00:08:00.655 Associated with SR-IOV VF: No 00:08:00.655 Max Data Transfer Size: 524288 00:08:00.655 Max Number of Namespaces: 256 00:08:00.655 Max Number of I/O Queues: 64 00:08:00.655 NVMe Specification Version (VS): 1.4 00:08:00.655 NVMe Specification Version (Identify): 1.4 00:08:00.655 Maximum Queue Entries: 2048 00:08:00.655 Contiguous Queues Required: Yes 00:08:00.656 Arbitration Mechanisms Supported 00:08:00.656 Weighted Round Robin: Not Supported 00:08:00.656 Vendor Specific: Not Supported 00:08:00.656 Reset Timeout: 7500 ms 00:08:00.656 Doorbell Stride: 4 bytes 00:08:00.656 NVM Subsystem Reset: Not Supported 00:08:00.656 Command Sets Supported 00:08:00.656 NVM Command Set: Supported 00:08:00.656 Boot Partition: Not Supported 00:08:00.656 Memory Page Size Minimum: 4096 bytes 00:08:00.656 Memory Page Size Maximum: 65536 bytes 00:08:00.656 Persistent Memory Region: Not Supported 00:08:00.656 Optional Asynchronous Events Supported 00:08:00.656 Namespace Attribute Notices: Supported 00:08:00.656 Firmware Activation Notices: Not Supported 00:08:00.656 ANA Change Notices: Not Supported 00:08:00.656 PLE Aggregate Log Change Notices: Not Supported 00:08:00.656 LBA Status Info Alert Notices: Not Supported 00:08:00.656 EGE Aggregate Log Change Notices: Not Supported 00:08:00.656 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.656 Zone Descriptor Change Notices: Not Supported 00:08:00.656 Discovery Log Change Notices: Not Supported 00:08:00.656 Controller Attributes 00:08:00.656 128-bit Host Identifier: Not Supported 00:08:00.656 Non-Operational Permissive Mode: Not Supported 00:08:00.656 NVM Sets: Not Supported 00:08:00.656 Read Recovery Levels: Not Supported 00:08:00.656 Endurance Groups: Not Supported 00:08:00.656 Predictable Latency Mode: Not Supported 00:08:00.656 Traffic Based Keep ALive: Not Supported 00:08:00.656 Namespace Granularity: Not Supported 00:08:00.656 SQ Associations: Not Supported 00:08:00.656 UUID List: Not Supported 00:08:00.656 Multi-Domain Subsystem: Not Supported 00:08:00.656 Fixed Capacity Management: Not Supported 00:08:00.656 Variable Capacity Management: Not Supported 00:08:00.656 Delete Endurance Group: Not Supported 00:08:00.656 Delete NVM Set: Not Supported 00:08:00.656 Extended LBA Formats Supported: Supported 00:08:00.656 Flexible Data Placement Supported: Not Supported 00:08:00.656 00:08:00.656 Controller Memory Buffer Support 00:08:00.656 ================================ 00:08:00.656 Supported: No 00:08:00.656 00:08:00.656 Persistent Memory Region Support 00:08:00.656 ================================ 00:08:00.656 Supported: No 00:08:00.656 00:08:00.656 Admin Command Set Attributes 00:08:00.656 ============================ 00:08:00.656 Security Send/Receive: Not Supported 00:08:00.656 Format NVM: Supported 00:08:00.656 Firmware Activate/Download: Not Supported 00:08:00.656 Namespace Management: Supported 00:08:00.656 Device Self-Test: Not Supported 00:08:00.656 Directives: Supported 00:08:00.656 NVMe-MI: Not Supported 00:08:00.656 Virtualization Management: Not Supported 00:08:00.656 Doorbell Buffer Config: Supported 00:08:00.656 Get LBA Status Capability: Not Supported 00:08:00.656 Command & Feature Lockdown Capability: Not Supported 00:08:00.656 Abort Command Limit: 4 00:08:00.656 Async Event Request Limit: 4 00:08:00.656 Number of Firmware Slots: N/A 00:08:00.656 Firmware Slot 1 Read-Only: N/A 00:08:00.656 Firmware Activation Without Reset: N/A 00:08:00.656 Multiple Update Detection Support: N/A 00:08:00.656 Firmware Update 
Granularity: No Information Provided 00:08:00.656 Per-Namespace SMART Log: Yes 00:08:00.656 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.656 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:00.656 Command Effects Log Page: Supported 00:08:00.656 Get Log Page Extended Data: Supported 00:08:00.656 Telemetry Log Pages: Not Supported 00:08:00.656 Persistent Event Log Pages: Not Supported 00:08:00.656 Supported Log Pages Log Page: May Support 00:08:00.656 Commands Supported & Effects Log Page: Not Supported 00:08:00.656 Feature Identifiers & Effects Log Page:May Support 00:08:00.656 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.656 Data Area 4 for Telemetry Log: Not Supported 00:08:00.656 Error Log Page Entries Supported: 1 00:08:00.656 Keep Alive: Not Supported 00:08:00.656 00:08:00.656 NVM Command Set Attributes 00:08:00.656 ========================== 00:08:00.656 Submission Queue Entry Size 00:08:00.656 Max: 64 00:08:00.656 Min: 64 00:08:00.656 Completion Queue Entry Size 00:08:00.656 Max: 16 00:08:00.656 Min: 16 00:08:00.656 Number of Namespaces: 256 00:08:00.656 Compare Command: Supported 00:08:00.656 Write Uncorrectable Command: Not Supported 00:08:00.656 Dataset Management Command: Supported 00:08:00.656 Write Zeroes Command: Supported 00:08:00.656 Set Features Save Field: Supported 00:08:00.656 Reservations: Not Supported 00:08:00.656 Timestamp: Supported 00:08:00.656 Copy: Supported 00:08:00.656 Volatile Write Cache: Present 00:08:00.656 Atomic Write Unit (Normal): 1 00:08:00.656 Atomic Write Unit (PFail): 1 00:08:00.656 Atomic Compare & Write Unit: 1 00:08:00.656 Fused Compare & Write: Not Supported 00:08:00.656 Scatter-Gather List 00:08:00.656 SGL Command Set: Supported 00:08:00.656 SGL Keyed: Not Supported 00:08:00.656 SGL Bit Bucket Descriptor: Not Supported 00:08:00.656 SGL Metadata Pointer: Not Supported 00:08:00.656 Oversized SGL: Not Supported 00:08:00.656 SGL Metadata Address: Not Supported 00:08:00.656 SGL Offset: Not Supported 00:08:00.656 Transport SGL Data Block: Not Supported 00:08:00.656 Replay Protected Memory Block: Not Supported 00:08:00.656 00:08:00.656 Firmware Slot Information 00:08:00.656 ========================= 00:08:00.656 Active slot: 1 00:08:00.656 Slot 1 Firmware Revision: 1.0 00:08:00.656 00:08:00.656 00:08:00.656 Commands Supported and Effects 00:08:00.656 ============================== 00:08:00.656 Admin Commands 00:08:00.656 -------------- 00:08:00.656 Delete I/O Submission Queue (00h): Supported 00:08:00.656 Create I/O Submission Queue (01h): Supported 00:08:00.656 Get Log Page (02h): Supported 00:08:00.656 Delete I/O Completion Queue (04h): Supported 00:08:00.656 Create I/O Completion Queue (05h): Supported 00:08:00.656 Identify (06h): Supported 00:08:00.656 Abort (08h): Supported 00:08:00.656 Set Features (09h): Supported 00:08:00.656 Get Features (0Ah): Supported 00:08:00.656 Asynchronous Event Request (0Ch): Supported 00:08:00.656 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.656 Directive Send (19h): Supported 00:08:00.656 Directive Receive (1Ah): Supported 00:08:00.656 Virtualization Management (1Ch): Supported 00:08:00.656 Doorbell Buffer Config (7Ch): Supported 00:08:00.656 Format NVM (80h): Supported LBA-Change 00:08:00.656 I/O Commands 00:08:00.656 ------------ 00:08:00.656 Flush (00h): Supported LBA-Change 00:08:00.656 Write (01h): Supported LBA-Change 00:08:00.656 Read (02h): Supported 00:08:00.656 Compare (05h): Supported 00:08:00.656 Write Zeroes (08h): Supported LBA-Change 00:08:00.656 
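The nvme.sh trace lines above show the harness looping over PCI BDFs and running spdk_nvme_identify once per device. A standalone sketch of that loop, with the bdfs array assumed from the four controllers seen in this run and SPDK_BIN mirroring the path in the trace:

  # Rough standalone equivalent of the traced nvme.sh loop (assumed BDF list).
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done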
Dataset Management (09h): Supported LBA-Change 00:08:00.656 Unknown (0Ch): Supported 00:08:00.656 Unknown (12h): Supported 00:08:00.656 Copy (19h): Supported LBA-Change 00:08:00.656 Unknown (1Dh): Supported LBA-Change 00:08:00.656 00:08:00.656 Error Log 00:08:00.656 ========= 00:08:00.656 00:08:00.656 Arbitration 00:08:00.656 =========== 00:08:00.656 Arbitration Burst: no limit 00:08:00.656 00:08:00.656 Power Management 00:08:00.656 ================ 00:08:00.656 Number of Power States: 1 00:08:00.656 Current Power State: Power State #0 00:08:00.656 Power State #0: 00:08:00.656 Max Power: 25.00 W 00:08:00.656 Non-Operational State: Operational 00:08:00.656 Entry Latency: 16 microseconds 00:08:00.656 Exit Latency: 4 microseconds 00:08:00.656 Relative Read Throughput: 0 00:08:00.656 Relative Read Latency: 0 00:08:00.656 Relative Write Throughput: 0 00:08:00.656 Relative Write Latency: 0 00:08:00.656 Idle Power: Not Reported 00:08:00.656 Active Power: Not Reported 00:08:00.656 Non-Operational Permissive Mode: Not Supported 00:08:00.656 00:08:00.656 Health Information 00:08:00.656 ================== 00:08:00.656 Critical Warnings: 00:08:00.656 Available Spare Space: OK 00:08:00.656 Temperature: OK 00:08:00.656 Device Reliability: OK 00:08:00.656 Read Only: No 00:08:00.656 Volatile Memory Backup: OK 00:08:00.656 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.656 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.656 Available Spare: 0% 00:08:00.656 Available Spare Threshold: 0% 00:08:00.656 Life Percentage Used: 0% 00:08:00.656 Data Units Read: 966 00:08:00.656 Data Units Written: 835 00:08:00.656 Host Read Commands: 49622 00:08:00.656 Host Write Commands: 48472 00:08:00.656 Controller Busy Time: 0 minutes 00:08:00.656 Power Cycles: 0 00:08:00.657 Power On Hours: 0 hours 00:08:00.657 Unsafe Shutdowns: 0 00:08:00.657 Unrecoverable Media Errors: 0 00:08:00.657 Lifetime Error Log Entries: 0 00:08:00.657 Warning Temperature Time: 0 minutes 00:08:00.657 Critical Temperature Time: 0 minutes 00:08:00.657 00:08:00.657 Number of Queues 00:08:00.657 ================ 00:08:00.657 Number of I/O Submission Queues: 64 00:08:00.657 Number of I/O Completion Queues: 64 00:08:00.657 00:08:00.657 ZNS Specific Controller Data 00:08:00.657 ============================ 00:08:00.657 Zone Append Size Limit: 0 00:08:00.657 00:08:00.657 00:08:00.657 Active Namespaces 00:08:00.657 ================= 00:08:00.657 Namespace ID:1 00:08:00.657 Error Recovery Timeout: Unlimited 00:08:00.657 Command Set Identifier: NVM (00h) 00:08:00.657 Deallocate: Supported 00:08:00.657 Deallocated/Unwritten Error: Supported 00:08:00.657 Deallocated Read Value: All 0x00 00:08:00.657 Deallocate in Write Zeroes: Not Supported 00:08:00.657 Deallocated Guard Field: 0xFFFF 00:08:00.657 Flush: Supported 00:08:00.657 Reservation: Not Supported 00:08:00.657 Namespace Sharing Capabilities: Private 00:08:00.657 Size (in LBAs): 1310720 (5GiB) 00:08:00.657 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.657 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.657 Thin Provisioning: Not Supported 00:08:00.657 Per-NS Atomic Units: No 00:08:00.657 Maximum Single Source Range Length: 128 00:08:00.657 Maximum Copy Length: 128 00:08:00.657 Maximum Source Range Count: 128 00:08:00.657 NGUID/EUI64 Never Reused: No 00:08:00.657 Namespace Write Protected: No 00:08:00.657 Number of LBA Formats: 8 00:08:00.657 Current LBA Format: LBA Format #04 00:08:00.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.657 LBA Format #01: Data Size: 512 Metadata Size: 8 
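A quick check of the namespace sizing reported above: with the current LBA Format #04 (4096-byte data, no metadata), 1310720 LBAs comes to exactly the 5GiB shown.

  # 1310720 LBAs * 4096 bytes = 5368709120 bytes = 5 GiB.
  echo "$(( 1310720 * 4096 / 1024**3 )) GiB"    # prints: 5 GiB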
00:08:00.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.657 00:08:00.657 NVM Specific Namespace Data 00:08:00.657 =========================== 00:08:00.657 Logical Block Storage Tag Mask: 0 00:08:00.657 Protection Information Capabilities: 00:08:00.657 16b Guard Protection Information Storage Tag Support: No 00:08:00.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.657 Storage Tag Check Read Support: No 00:08:00.657 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.657 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.657 09:20:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:00.919 ===================================================== 00:08:00.919 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.919 ===================================================== 00:08:00.919 Controller Capabilities/Features 00:08:00.919 ================================ 00:08:00.919 Vendor ID: 1b36 00:08:00.919 Subsystem Vendor ID: 1af4 00:08:00.919 Serial Number: 12342 00:08:00.919 Model Number: QEMU NVMe Ctrl 00:08:00.919 Firmware Version: 8.0.0 00:08:00.919 Recommended Arb Burst: 6 00:08:00.919 IEEE OUI Identifier: 00 54 52 00:08:00.919 Multi-path I/O 00:08:00.919 May have multiple subsystem ports: No 00:08:00.919 May have multiple controllers: No 00:08:00.919 Associated with SR-IOV VF: No 00:08:00.919 Max Data Transfer Size: 524288 00:08:00.919 Max Number of Namespaces: 256 00:08:00.919 Max Number of I/O Queues: 64 00:08:00.919 NVMe Specification Version (VS): 1.4 00:08:00.919 NVMe Specification Version (Identify): 1.4 00:08:00.919 Maximum Queue Entries: 2048 00:08:00.919 Contiguous Queues Required: Yes 00:08:00.919 Arbitration Mechanisms Supported 00:08:00.919 Weighted Round Robin: Not Supported 00:08:00.919 Vendor Specific: Not Supported 00:08:00.919 Reset Timeout: 7500 ms 00:08:00.919 Doorbell Stride: 4 bytes 00:08:00.919 NVM Subsystem Reset: Not Supported 00:08:00.919 Command Sets Supported 00:08:00.919 NVM Command Set: Supported 00:08:00.919 Boot Partition: Not Supported 00:08:00.919 Memory Page Size Minimum: 4096 bytes 00:08:00.919 Memory Page Size Maximum: 65536 bytes 00:08:00.919 Persistent Memory Region: Not Supported 00:08:00.919 Optional Asynchronous Events Supported 00:08:00.919 Namespace Attribute Notices: Supported 00:08:00.919 Firmware 
Activation Notices: Not Supported 00:08:00.919 ANA Change Notices: Not Supported 00:08:00.919 PLE Aggregate Log Change Notices: Not Supported 00:08:00.919 LBA Status Info Alert Notices: Not Supported 00:08:00.919 EGE Aggregate Log Change Notices: Not Supported 00:08:00.919 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.919 Zone Descriptor Change Notices: Not Supported 00:08:00.919 Discovery Log Change Notices: Not Supported 00:08:00.919 Controller Attributes 00:08:00.919 128-bit Host Identifier: Not Supported 00:08:00.919 Non-Operational Permissive Mode: Not Supported 00:08:00.919 NVM Sets: Not Supported 00:08:00.919 Read Recovery Levels: Not Supported 00:08:00.919 Endurance Groups: Not Supported 00:08:00.919 Predictable Latency Mode: Not Supported 00:08:00.919 Traffic Based Keep ALive: Not Supported 00:08:00.919 Namespace Granularity: Not Supported 00:08:00.919 SQ Associations: Not Supported 00:08:00.919 UUID List: Not Supported 00:08:00.919 Multi-Domain Subsystem: Not Supported 00:08:00.919 Fixed Capacity Management: Not Supported 00:08:00.919 Variable Capacity Management: Not Supported 00:08:00.919 Delete Endurance Group: Not Supported 00:08:00.919 Delete NVM Set: Not Supported 00:08:00.919 Extended LBA Formats Supported: Supported 00:08:00.919 Flexible Data Placement Supported: Not Supported 00:08:00.919 00:08:00.919 Controller Memory Buffer Support 00:08:00.919 ================================ 00:08:00.919 Supported: No 00:08:00.919 00:08:00.919 Persistent Memory Region Support 00:08:00.919 ================================ 00:08:00.919 Supported: No 00:08:00.919 00:08:00.919 Admin Command Set Attributes 00:08:00.919 ============================ 00:08:00.919 Security Send/Receive: Not Supported 00:08:00.919 Format NVM: Supported 00:08:00.919 Firmware Activate/Download: Not Supported 00:08:00.919 Namespace Management: Supported 00:08:00.919 Device Self-Test: Not Supported 00:08:00.919 Directives: Supported 00:08:00.919 NVMe-MI: Not Supported 00:08:00.919 Virtualization Management: Not Supported 00:08:00.919 Doorbell Buffer Config: Supported 00:08:00.919 Get LBA Status Capability: Not Supported 00:08:00.919 Command & Feature Lockdown Capability: Not Supported 00:08:00.919 Abort Command Limit: 4 00:08:00.919 Async Event Request Limit: 4 00:08:00.919 Number of Firmware Slots: N/A 00:08:00.919 Firmware Slot 1 Read-Only: N/A 00:08:00.919 Firmware Activation Without Reset: N/A 00:08:00.919 Multiple Update Detection Support: N/A 00:08:00.919 Firmware Update Granularity: No Information Provided 00:08:00.919 Per-Namespace SMART Log: Yes 00:08:00.919 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.919 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:00.919 Command Effects Log Page: Supported 00:08:00.919 Get Log Page Extended Data: Supported 00:08:00.919 Telemetry Log Pages: Not Supported 00:08:00.919 Persistent Event Log Pages: Not Supported 00:08:00.919 Supported Log Pages Log Page: May Support 00:08:00.919 Commands Supported & Effects Log Page: Not Supported 00:08:00.919 Feature Identifiers & Effects Log Page:May Support 00:08:00.919 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.919 Data Area 4 for Telemetry Log: Not Supported 00:08:00.919 Error Log Page Entries Supported: 1 00:08:00.919 Keep Alive: Not Supported 00:08:00.919 00:08:00.919 NVM Command Set Attributes 00:08:00.919 ========================== 00:08:00.919 Submission Queue Entry Size 00:08:00.919 Max: 64 00:08:00.919 Min: 64 00:08:00.919 Completion Queue Entry Size 00:08:00.919 Max: 16 
00:08:00.919 Min: 16 00:08:00.919 Number of Namespaces: 256 00:08:00.919 Compare Command: Supported 00:08:00.920 Write Uncorrectable Command: Not Supported 00:08:00.920 Dataset Management Command: Supported 00:08:00.920 Write Zeroes Command: Supported 00:08:00.920 Set Features Save Field: Supported 00:08:00.920 Reservations: Not Supported 00:08:00.920 Timestamp: Supported 00:08:00.920 Copy: Supported 00:08:00.920 Volatile Write Cache: Present 00:08:00.920 Atomic Write Unit (Normal): 1 00:08:00.920 Atomic Write Unit (PFail): 1 00:08:00.920 Atomic Compare & Write Unit: 1 00:08:00.920 Fused Compare & Write: Not Supported 00:08:00.920 Scatter-Gather List 00:08:00.920 SGL Command Set: Supported 00:08:00.920 SGL Keyed: Not Supported 00:08:00.920 SGL Bit Bucket Descriptor: Not Supported 00:08:00.920 SGL Metadata Pointer: Not Supported 00:08:00.920 Oversized SGL: Not Supported 00:08:00.920 SGL Metadata Address: Not Supported 00:08:00.920 SGL Offset: Not Supported 00:08:00.920 Transport SGL Data Block: Not Supported 00:08:00.920 Replay Protected Memory Block: Not Supported 00:08:00.920 00:08:00.920 Firmware Slot Information 00:08:00.920 ========================= 00:08:00.920 Active slot: 1 00:08:00.920 Slot 1 Firmware Revision: 1.0 00:08:00.920 00:08:00.920 00:08:00.920 Commands Supported and Effects 00:08:00.920 ============================== 00:08:00.920 Admin Commands 00:08:00.920 -------------- 00:08:00.920 Delete I/O Submission Queue (00h): Supported 00:08:00.920 Create I/O Submission Queue (01h): Supported 00:08:00.920 Get Log Page (02h): Supported 00:08:00.920 Delete I/O Completion Queue (04h): Supported 00:08:00.920 Create I/O Completion Queue (05h): Supported 00:08:00.920 Identify (06h): Supported 00:08:00.920 Abort (08h): Supported 00:08:00.920 Set Features (09h): Supported 00:08:00.920 Get Features (0Ah): Supported 00:08:00.920 Asynchronous Event Request (0Ch): Supported 00:08:00.920 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.920 Directive Send (19h): Supported 00:08:00.920 Directive Receive (1Ah): Supported 00:08:00.920 Virtualization Management (1Ch): Supported 00:08:00.920 Doorbell Buffer Config (7Ch): Supported 00:08:00.920 Format NVM (80h): Supported LBA-Change 00:08:00.920 I/O Commands 00:08:00.920 ------------ 00:08:00.920 Flush (00h): Supported LBA-Change 00:08:00.920 Write (01h): Supported LBA-Change 00:08:00.920 Read (02h): Supported 00:08:00.920 Compare (05h): Supported 00:08:00.920 Write Zeroes (08h): Supported LBA-Change 00:08:00.920 Dataset Management (09h): Supported LBA-Change 00:08:00.920 Unknown (0Ch): Supported 00:08:00.920 Unknown (12h): Supported 00:08:00.920 Copy (19h): Supported LBA-Change 00:08:00.920 Unknown (1Dh): Supported LBA-Change 00:08:00.920 00:08:00.920 Error Log 00:08:00.920 ========= 00:08:00.920 00:08:00.920 Arbitration 00:08:00.920 =========== 00:08:00.920 Arbitration Burst: no limit 00:08:00.920 00:08:00.920 Power Management 00:08:00.920 ================ 00:08:00.920 Number of Power States: 1 00:08:00.920 Current Power State: Power State #0 00:08:00.920 Power State #0: 00:08:00.920 Max Power: 25.00 W 00:08:00.920 Non-Operational State: Operational 00:08:00.920 Entry Latency: 16 microseconds 00:08:00.920 Exit Latency: 4 microseconds 00:08:00.920 Relative Read Throughput: 0 00:08:00.920 Relative Read Latency: 0 00:08:00.920 Relative Write Throughput: 0 00:08:00.920 Relative Write Latency: 0 00:08:00.920 Idle Power: Not Reported 00:08:00.920 Active Power: Not Reported 00:08:00.920 Non-Operational Permissive Mode: Not Supported 
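The power management block above describes a single power state (25 W max, 16 microsecond entry and 4 microsecond exit latency). A thin sketch that collects those figures from the hypothetical identify-12342.log in one pass:

  # Pull the power-state summary reported above (assumed log filename).
  grep -E 'Number of Power States|Max Power|Entry Latency|Exit Latency' identify-12342.log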
00:08:00.920 00:08:00.920 Health Information 00:08:00.920 ================== 00:08:00.920 Critical Warnings: 00:08:00.920 Available Spare Space: OK 00:08:00.920 Temperature: OK 00:08:00.920 Device Reliability: OK 00:08:00.920 Read Only: No 00:08:00.920 Volatile Memory Backup: OK 00:08:00.920 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.920 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.920 Available Spare: 0% 00:08:00.920 Available Spare Threshold: 0% 00:08:00.920 Life Percentage Used: 0% 00:08:00.920 Data Units Read: 1970 00:08:00.920 Data Units Written: 1757 00:08:00.920 Host Read Commands: 101067 00:08:00.920 Host Write Commands: 99336 00:08:00.920 Controller Busy Time: 0 minutes 00:08:00.920 Power Cycles: 0 00:08:00.920 Power On Hours: 0 hours 00:08:00.920 Unsafe Shutdowns: 0 00:08:00.920 Unrecoverable Media Errors: 0 00:08:00.920 Lifetime Error Log Entries: 0 00:08:00.920 Warning Temperature Time: 0 minutes 00:08:00.920 Critical Temperature Time: 0 minutes 00:08:00.920 00:08:00.920 Number of Queues 00:08:00.920 ================ 00:08:00.920 Number of I/O Submission Queues: 64 00:08:00.920 Number of I/O Completion Queues: 64 00:08:00.920 00:08:00.920 ZNS Specific Controller Data 00:08:00.920 ============================ 00:08:00.920 Zone Append Size Limit: 0 00:08:00.920 00:08:00.920 00:08:00.920 Active Namespaces 00:08:00.920 ================= 00:08:00.920 Namespace ID:1 00:08:00.920 Error Recovery Timeout: Unlimited 00:08:00.920 Command Set Identifier: NVM (00h) 00:08:00.920 Deallocate: Supported 00:08:00.920 Deallocated/Unwritten Error: Supported 00:08:00.920 Deallocated Read Value: All 0x00 00:08:00.920 Deallocate in Write Zeroes: Not Supported 00:08:00.920 Deallocated Guard Field: 0xFFFF 00:08:00.920 Flush: Supported 00:08:00.920 Reservation: Not Supported 00:08:00.920 Namespace Sharing Capabilities: Private 00:08:00.920 Size (in LBAs): 1048576 (4GiB) 00:08:00.920 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.920 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.920 Thin Provisioning: Not Supported 00:08:00.920 Per-NS Atomic Units: No 00:08:00.920 Maximum Single Source Range Length: 128 00:08:00.920 Maximum Copy Length: 128 00:08:00.920 Maximum Source Range Count: 128 00:08:00.920 NGUID/EUI64 Never Reused: No 00:08:00.920 Namespace Write Protected: No 00:08:00.920 Number of LBA Formats: 8 00:08:00.920 Current LBA Format: LBA Format #04 00:08:00.920 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.920 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.920 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.920 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.920 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.920 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.920 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.920 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.920 00:08:00.920 NVM Specific Namespace Data 00:08:00.920 =========================== 00:08:00.920 Logical Block Storage Tag Mask: 0 00:08:00.920 Protection Information Capabilities: 00:08:00.920 16b Guard Protection Information Storage Tag Support: No 00:08:00.920 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.920 Storage Tag Check Read Support: No 00:08:00.920 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.920 Namespace ID:2 00:08:00.920 Error Recovery Timeout: Unlimited 00:08:00.920 Command Set Identifier: NVM (00h) 00:08:00.920 Deallocate: Supported 00:08:00.920 Deallocated/Unwritten Error: Supported 00:08:00.920 Deallocated Read Value: All 0x00 00:08:00.920 Deallocate in Write Zeroes: Not Supported 00:08:00.920 Deallocated Guard Field: 0xFFFF 00:08:00.920 Flush: Supported 00:08:00.920 Reservation: Not Supported 00:08:00.920 Namespace Sharing Capabilities: Private 00:08:00.920 Size (in LBAs): 1048576 (4GiB) 00:08:00.920 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.920 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.920 Thin Provisioning: Not Supported 00:08:00.920 Per-NS Atomic Units: No 00:08:00.920 Maximum Single Source Range Length: 128 00:08:00.920 Maximum Copy Length: 128 00:08:00.920 Maximum Source Range Count: 128 00:08:00.920 NGUID/EUI64 Never Reused: No 00:08:00.921 Namespace Write Protected: No 00:08:00.921 Number of LBA Formats: 8 00:08:00.921 Current LBA Format: LBA Format #04 00:08:00.921 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.921 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.921 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.921 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.921 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.921 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.921 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.921 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.921 00:08:00.921 NVM Specific Namespace Data 00:08:00.921 =========================== 00:08:00.921 Logical Block Storage Tag Mask: 0 00:08:00.921 Protection Information Capabilities: 00:08:00.921 16b Guard Protection Information Storage Tag Support: No 00:08:00.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.921 Storage Tag Check Read Support: No 00:08:00.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Namespace ID:3 00:08:00.921 Error Recovery Timeout: Unlimited 00:08:00.921 Command Set Identifier: NVM (00h) 00:08:00.921 Deallocate: Supported 00:08:00.921 Deallocated/Unwritten Error: Supported 00:08:00.921 Deallocated Read 
Value: All 0x00 00:08:00.921 Deallocate in Write Zeroes: Not Supported 00:08:00.921 Deallocated Guard Field: 0xFFFF 00:08:00.921 Flush: Supported 00:08:00.921 Reservation: Not Supported 00:08:00.921 Namespace Sharing Capabilities: Private 00:08:00.921 Size (in LBAs): 1048576 (4GiB) 00:08:00.921 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.921 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.921 Thin Provisioning: Not Supported 00:08:00.921 Per-NS Atomic Units: No 00:08:00.921 Maximum Single Source Range Length: 128 00:08:00.921 Maximum Copy Length: 128 00:08:00.921 Maximum Source Range Count: 128 00:08:00.921 NGUID/EUI64 Never Reused: No 00:08:00.921 Namespace Write Protected: No 00:08:00.921 Number of LBA Formats: 8 00:08:00.921 Current LBA Format: LBA Format #04 00:08:00.921 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.921 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.921 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.921 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.921 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.921 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.921 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.921 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.921 00:08:00.921 NVM Specific Namespace Data 00:08:00.921 =========================== 00:08:00.921 Logical Block Storage Tag Mask: 0 00:08:00.921 Protection Information Capabilities: 00:08:00.921 16b Guard Protection Information Storage Tag Support: No 00:08:00.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.921 Storage Tag Check Read Support: No 00:08:00.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.921 09:20:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.921 09:20:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:01.184 ===================================================== 00:08:01.184 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.184 ===================================================== 00:08:01.184 Controller Capabilities/Features 00:08:01.184 ================================ 00:08:01.184 Vendor ID: 1b36 00:08:01.184 Subsystem Vendor ID: 1af4 00:08:01.184 Serial Number: 12343 00:08:01.184 Model Number: QEMU NVMe Ctrl 00:08:01.184 Firmware Version: 8.0.0 00:08:01.184 Recommended Arb Burst: 6 00:08:01.184 IEEE OUI Identifier: 00 54 52 00:08:01.184 Multi-path I/O 00:08:01.184 May have multiple subsystem ports: No 00:08:01.184 May have multiple controllers: Yes 00:08:01.184 Associated with SR-IOV VF: No 00:08:01.184 Max Data Transfer Size: 524288 00:08:01.184 Max Number of Namespaces: 
256 00:08:01.184 Max Number of I/O Queues: 64 00:08:01.184 NVMe Specification Version (VS): 1.4 00:08:01.184 NVMe Specification Version (Identify): 1.4 00:08:01.184 Maximum Queue Entries: 2048 00:08:01.184 Contiguous Queues Required: Yes 00:08:01.184 Arbitration Mechanisms Supported 00:08:01.184 Weighted Round Robin: Not Supported 00:08:01.184 Vendor Specific: Not Supported 00:08:01.184 Reset Timeout: 7500 ms 00:08:01.184 Doorbell Stride: 4 bytes 00:08:01.184 NVM Subsystem Reset: Not Supported 00:08:01.184 Command Sets Supported 00:08:01.184 NVM Command Set: Supported 00:08:01.184 Boot Partition: Not Supported 00:08:01.184 Memory Page Size Minimum: 4096 bytes 00:08:01.184 Memory Page Size Maximum: 65536 bytes 00:08:01.184 Persistent Memory Region: Not Supported 00:08:01.184 Optional Asynchronous Events Supported 00:08:01.184 Namespace Attribute Notices: Supported 00:08:01.184 Firmware Activation Notices: Not Supported 00:08:01.184 ANA Change Notices: Not Supported 00:08:01.184 PLE Aggregate Log Change Notices: Not Supported 00:08:01.184 LBA Status Info Alert Notices: Not Supported 00:08:01.184 EGE Aggregate Log Change Notices: Not Supported 00:08:01.184 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.184 Zone Descriptor Change Notices: Not Supported 00:08:01.184 Discovery Log Change Notices: Not Supported 00:08:01.184 Controller Attributes 00:08:01.184 128-bit Host Identifier: Not Supported 00:08:01.184 Non-Operational Permissive Mode: Not Supported 00:08:01.184 NVM Sets: Not Supported 00:08:01.184 Read Recovery Levels: Not Supported 00:08:01.184 Endurance Groups: Supported 00:08:01.184 Predictable Latency Mode: Not Supported 00:08:01.184 Traffic Based Keep Alive: Not Supported 00:08:01.184 Namespace Granularity: Not Supported 00:08:01.184 SQ Associations: Not Supported 00:08:01.184 UUID List: Not Supported 00:08:01.184 Multi-Domain Subsystem: Not Supported 00:08:01.184 Fixed Capacity Management: Not Supported 00:08:01.184 Variable Capacity Management: Not Supported 00:08:01.184 Delete Endurance Group: Not Supported 00:08:01.184 Delete NVM Set: Not Supported 00:08:01.184 Extended LBA Formats Supported: Supported 00:08:01.184 Flexible Data Placement Supported: Supported 00:08:01.184 00:08:01.184 Controller Memory Buffer Support 00:08:01.184 ================================ 00:08:01.184 Supported: No 00:08:01.184 00:08:01.184 Persistent Memory Region Support 00:08:01.184 ================================ 00:08:01.184 Supported: No 00:08:01.184 00:08:01.184 Admin Command Set Attributes 00:08:01.184 ============================ 00:08:01.184 Security Send/Receive: Not Supported 00:08:01.184 Format NVM: Supported 00:08:01.184 Firmware Activate/Download: Not Supported 00:08:01.184 Namespace Management: Supported 00:08:01.184 Device Self-Test: Not Supported 00:08:01.184 Directives: Supported 00:08:01.184 NVMe-MI: Not Supported 00:08:01.184 Virtualization Management: Not Supported 00:08:01.184 Doorbell Buffer Config: Supported 00:08:01.184 Get LBA Status Capability: Not Supported 00:08:01.184 Command & Feature Lockdown Capability: Not Supported 00:08:01.184 Abort Command Limit: 4 00:08:01.184 Async Event Request Limit: 4 00:08:01.184 Number of Firmware Slots: N/A 00:08:01.184 Firmware Slot 1 Read-Only: N/A 00:08:01.184 Firmware Activation Without Reset: N/A 00:08:01.184 Multiple Update Detection Support: N/A 00:08:01.184 Firmware Update Granularity: No Information Provided 00:08:01.184 Per-Namespace SMART Log: Yes 00:08:01.184 Asymmetric Namespace Access Log Page: Not Supported
00:08:01.184 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:01.184 Command Effects Log Page: Supported 00:08:01.184 Get Log Page Extended Data: Supported 00:08:01.184 Telemetry Log Pages: Not Supported 00:08:01.184 Persistent Event Log Pages: Not Supported 00:08:01.184 Supported Log Pages Log Page: May Support 00:08:01.184 Commands Supported & Effects Log Page: Not Supported 00:08:01.184 Feature Identifiers & Effects Log Page: May Support 00:08:01.184 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.184 Data Area 4 for Telemetry Log: Not Supported 00:08:01.184 Error Log Page Entries Supported: 1 00:08:01.184 Keep Alive: Not Supported 00:08:01.184 00:08:01.184 NVM Command Set Attributes 00:08:01.184 ========================== 00:08:01.184 Submission Queue Entry Size 00:08:01.184 Max: 64 00:08:01.184 Min: 64 00:08:01.184 Completion Queue Entry Size 00:08:01.184 Max: 16 00:08:01.184 Min: 16 00:08:01.184 Number of Namespaces: 256 00:08:01.184 Compare Command: Supported 00:08:01.184 Write Uncorrectable Command: Not Supported 00:08:01.184 Dataset Management Command: Supported 00:08:01.184 Write Zeroes Command: Supported 00:08:01.184 Set Features Save Field: Supported 00:08:01.184 Reservations: Not Supported 00:08:01.184 Timestamp: Supported 00:08:01.184 Copy: Supported 00:08:01.184 Volatile Write Cache: Present 00:08:01.184 Atomic Write Unit (Normal): 1 00:08:01.184 Atomic Write Unit (PFail): 1 00:08:01.184 Atomic Compare & Write Unit: 1 00:08:01.184 Fused Compare & Write: Not Supported 00:08:01.184 Scatter-Gather List 00:08:01.184 SGL Command Set: Supported 00:08:01.184 SGL Keyed: Not Supported 00:08:01.184 SGL Bit Bucket Descriptor: Not Supported 00:08:01.184 SGL Metadata Pointer: Not Supported 00:08:01.184 Oversized SGL: Not Supported 00:08:01.184 SGL Metadata Address: Not Supported 00:08:01.184 SGL Offset: Not Supported 00:08:01.184 Transport SGL Data Block: Not Supported 00:08:01.184 Replay Protected Memory Block: Not Supported 00:08:01.184 00:08:01.184 Firmware Slot Information 00:08:01.184 ========================= 00:08:01.184 Active slot: 1 00:08:01.184 Slot 1 Firmware Revision: 1.0 00:08:01.184 00:08:01.184 00:08:01.184 Commands Supported and Effects 00:08:01.184 ============================== 00:08:01.184 Admin Commands 00:08:01.184 -------------- 00:08:01.185 Delete I/O Submission Queue (00h): Supported 00:08:01.185 Create I/O Submission Queue (01h): Supported 00:08:01.185 Get Log Page (02h): Supported 00:08:01.185 Delete I/O Completion Queue (04h): Supported 00:08:01.185 Create I/O Completion Queue (05h): Supported 00:08:01.185 Identify (06h): Supported 00:08:01.185 Abort (08h): Supported 00:08:01.185 Set Features (09h): Supported 00:08:01.185 Get Features (0Ah): Supported 00:08:01.185 Asynchronous Event Request (0Ch): Supported 00:08:01.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.185 Directive Send (19h): Supported 00:08:01.185 Directive Receive (1Ah): Supported 00:08:01.185 Virtualization Management (1Ch): Supported 00:08:01.185 Doorbell Buffer Config (7Ch): Supported 00:08:01.185 Format NVM (80h): Supported LBA-Change 00:08:01.185 I/O Commands 00:08:01.185 ------------ 00:08:01.185 Flush (00h): Supported LBA-Change 00:08:01.185 Write (01h): Supported LBA-Change 00:08:01.185 Read (02h): Supported 00:08:01.185 Compare (05h): Supported 00:08:01.185 Write Zeroes (08h): Supported LBA-Change 00:08:01.185 Dataset Management (09h): Supported LBA-Change 00:08:01.185 Unknown (0Ch): Supported 00:08:01.185 Unknown (12h): Supported 00:08:01.185 Copy
(19h): Supported LBA-Change 00:08:01.185 Unknown (1Dh): Supported LBA-Change 00:08:01.185 00:08:01.185 Error Log 00:08:01.185 ========= 00:08:01.185 00:08:01.185 Arbitration 00:08:01.185 =========== 00:08:01.185 Arbitration Burst: no limit 00:08:01.185 00:08:01.185 Power Management 00:08:01.185 ================ 00:08:01.185 Number of Power States: 1 00:08:01.185 Current Power State: Power State #0 00:08:01.185 Power State #0: 00:08:01.185 Max Power: 25.00 W 00:08:01.185 Non-Operational State: Operational 00:08:01.185 Entry Latency: 16 microseconds 00:08:01.185 Exit Latency: 4 microseconds 00:08:01.185 Relative Read Throughput: 0 00:08:01.185 Relative Read Latency: 0 00:08:01.185 Relative Write Throughput: 0 00:08:01.185 Relative Write Latency: 0 00:08:01.185 Idle Power: Not Reported 00:08:01.185 Active Power: Not Reported 00:08:01.185 Non-Operational Permissive Mode: Not Supported 00:08:01.185 00:08:01.185 Health Information 00:08:01.185 ================== 00:08:01.185 Critical Warnings: 00:08:01.185 Available Spare Space: OK 00:08:01.185 Temperature: OK 00:08:01.185 Device Reliability: OK 00:08:01.185 Read Only: No 00:08:01.185 Volatile Memory Backup: OK 00:08:01.185 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.185 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.185 Available Spare: 0% 00:08:01.185 Available Spare Threshold: 0% 00:08:01.185 Life Percentage Used: 0% 00:08:01.185 Data Units Read: 708 00:08:01.185 Data Units Written: 637 00:08:01.185 Host Read Commands: 34231 00:08:01.185 Host Write Commands: 33656 00:08:01.185 Controller Busy Time: 0 minutes 00:08:01.185 Power Cycles: 0 00:08:01.185 Power On Hours: 0 hours 00:08:01.185 Unsafe Shutdowns: 0 00:08:01.185 Unrecoverable Media Errors: 0 00:08:01.185 Lifetime Error Log Entries: 0 00:08:01.185 Warning Temperature Time: 0 minutes 00:08:01.185 Critical Temperature Time: 0 minutes 00:08:01.185 00:08:01.185 Number of Queues 00:08:01.185 ================ 00:08:01.185 Number of I/O Submission Queues: 64 00:08:01.185 Number of I/O Completion Queues: 64 00:08:01.185 00:08:01.185 ZNS Specific Controller Data 00:08:01.185 ============================ 00:08:01.185 Zone Append Size Limit: 0 00:08:01.185 00:08:01.185 00:08:01.185 Active Namespaces 00:08:01.185 ================= 00:08:01.185 Namespace ID:1 00:08:01.185 Error Recovery Timeout: Unlimited 00:08:01.185 Command Set Identifier: NVM (00h) 00:08:01.185 Deallocate: Supported 00:08:01.185 Deallocated/Unwritten Error: Supported 00:08:01.185 Deallocated Read Value: All 0x00 00:08:01.185 Deallocate in Write Zeroes: Not Supported 00:08:01.185 Deallocated Guard Field: 0xFFFF 00:08:01.185 Flush: Supported 00:08:01.185 Reservation: Not Supported 00:08:01.185 Namespace Sharing Capabilities: Multiple Controllers 00:08:01.185 Size (in LBAs): 262144 (1GiB) 00:08:01.185 Capacity (in LBAs): 262144 (1GiB) 00:08:01.185 Utilization (in LBAs): 262144 (1GiB) 00:08:01.185 Thin Provisioning: Not Supported 00:08:01.185 Per-NS Atomic Units: No 00:08:01.185 Maximum Single Source Range Length: 128 00:08:01.185 Maximum Copy Length: 128 00:08:01.185 Maximum Source Range Count: 128 00:08:01.185 NGUID/EUI64 Never Reused: No 00:08:01.185 Namespace Write Protected: No 00:08:01.185 Endurance group ID: 1 00:08:01.185 Number of LBA Formats: 8 00:08:01.185 Current LBA Format: LBA Format #04 00:08:01.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.185 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:01.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.185 00:08:01.185 Get Feature FDP: 00:08:01.185 ================ 00:08:01.185 Enabled: Yes 00:08:01.185 FDP configuration index: 0 00:08:01.185 00:08:01.185 FDP configurations log page 00:08:01.185 =========================== 00:08:01.185 Number of FDP configurations: 1 00:08:01.185 Version: 0 00:08:01.185 Size: 112 00:08:01.185 FDP Configuration Descriptor: 0 00:08:01.185 Descriptor Size: 96 00:08:01.185 Reclaim Group Identifier format: 2 00:08:01.185 FDP Volatile Write Cache: Not Present 00:08:01.185 FDP Configuration: Valid 00:08:01.185 Vendor Specific Size: 0 00:08:01.185 Number of Reclaim Groups: 2 00:08:01.185 Number of Reclaim Unit Handles: 8 00:08:01.185 Max Placement Identifiers: 128 00:08:01.185 Number of Namespaces Supported: 256 00:08:01.185 Reclaim Unit Nominal Size: 6000000 bytes 00:08:01.185 Estimated Reclaim Unit Time Limit: Not Reported 00:08:01.185 RUH Desc #000: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #001: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #002: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #003: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #004: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #005: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #006: RUH Type: Initially Isolated 00:08:01.185 RUH Desc #007: RUH Type: Initially Isolated 00:08:01.185 00:08:01.185 FDP reclaim unit handle usage log page 00:08:01.185 ====================================== 00:08:01.185 Number of Reclaim Unit Handles: 8 00:08:01.185 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:01.185 RUH Usage Desc #001: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #002: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #003: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #004: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #005: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #006: RUH Attributes: Unused 00:08:01.185 RUH Usage Desc #007: RUH Attributes: Unused 00:08:01.185 00:08:01.185 FDP statistics log page 00:08:01.185 ======================= 00:08:01.185 Host bytes with metadata written: 368025600 00:08:01.185 Media bytes with metadata written: 368066560 00:08:01.185 Media bytes erased: 0 00:08:01.185 00:08:01.185 FDP events log page 00:08:01.185 =================== 00:08:01.185 Number of FDP events: 0 00:08:01.185 00:08:01.185 NVM Specific Namespace Data 00:08:01.185 =========================== 00:08:01.185 Logical Block Storage Tag Mask: 0 00:08:01.185 Protection Information Capabilities: 00:08:01.185 16b Guard Protection Information Storage Tag Support: No 00:08:01.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.185 Storage Tag Check Read Support: No 00:08:01.185 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.185 00:08:01.185 real 0m1.249s 00:08:01.185 user 0m0.461s 00:08:01.185 sys 0m0.566s 00:08:01.185 09:20:26 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.185 09:20:26 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:01.185 ************************************ 00:08:01.185 END TEST nvme_identify 00:08:01.185 ************************************ 00:08:01.186 09:20:26 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:01.186 09:20:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.186 09:20:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.186 09:20:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.186 ************************************ 00:08:01.186 START TEST nvme_perf 00:08:01.186 ************************************ 00:08:01.186 09:20:26 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:01.186 09:20:26 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:02.574 Initializing NVMe Controllers 00:08:02.574 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.574 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.574 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.574 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.574 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:02.574 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:02.574 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:02.574 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:02.574 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:02.574 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:02.574 Initialization complete. Launching workers. 
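Note on the nvme_perf invocation above: the test drives every attached controller with the perf tool shipped in the SPDK build, using exactly the flags recorded in the log. A minimal standalone sketch of that invocation follows; the flag annotations reflect common spdk_nvme_perf usage and are assumptions to check against spdk_nvme_perf --help for this SPDK revision.

    # Sketch: replay the nvme_perf run from this log.
    # -q 128    queue depth of 128 outstanding I/Os per namespace
    # -w read   sequential read workload
    # -o 12288  I/O size of 12288 bytes (3 x 4096)
    # -t 1      run for 1 second
    # -LL, -i 0, and -N are copied verbatim from the log; -LL is assumed to
    # enable detailed latency tracking (the histograms below), -i 0 to set the
    # shared-memory instance ID, and -N to skip the shutdown notification.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N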
00:08:02.574 ======================================================== 00:08:02.574 Latency(us) 00:08:02.574 Device Information : IOPS MiB/s Average min max 00:08:02.574 PCIE (0000:00:13.0) NSID 1 from core 0: 15356.67 179.96 8347.33 5788.77 30346.17 00:08:02.574 PCIE (0000:00:10.0) NSID 1 from core 0: 15356.67 179.96 8334.37 5647.07 28985.66 00:08:02.574 PCIE (0000:00:11.0) NSID 1 from core 0: 15356.67 179.96 8322.68 5776.27 27364.80 00:08:02.574 PCIE (0000:00:12.0) NSID 1 from core 0: 15356.67 179.96 8309.95 5752.98 26270.66 00:08:02.574 PCIE (0000:00:12.0) NSID 2 from core 0: 15356.67 179.96 8296.74 5792.58 24633.45 00:08:02.574 PCIE (0000:00:12.0) NSID 3 from core 0: 15356.67 179.96 8284.15 5766.95 23000.16 00:08:02.574 ======================================================== 00:08:02.574 Total : 92140.01 1079.77 8315.87 5647.07 30346.17 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5948.652us 00:08:02.574 10.00000% : 6276.332us 00:08:02.574 25.00000% : 6704.837us 00:08:02.574 50.00000% : 8065.969us 00:08:02.574 75.00000% : 9376.689us 00:08:02.574 90.00000% : 10737.822us 00:08:02.574 95.00000% : 11292.357us 00:08:02.574 98.00000% : 11846.892us 00:08:02.574 99.00000% : 13006.375us 00:08:02.574 99.50000% : 24702.031us 00:08:02.574 99.90000% : 30045.735us 00:08:02.574 99.99000% : 30449.034us 00:08:02.574 99.99900% : 30449.034us 00:08:02.574 99.99990% : 30449.034us 00:08:02.574 99.99999% : 30449.034us 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5873.034us 00:08:02.574 10.00000% : 6276.332us 00:08:02.574 25.00000% : 6755.249us 00:08:02.574 50.00000% : 8065.969us 00:08:02.574 75.00000% : 9376.689us 00:08:02.574 90.00000% : 10737.822us 00:08:02.574 95.00000% : 11241.945us 00:08:02.574 98.00000% : 11947.717us 00:08:02.574 99.00000% : 12552.665us 00:08:02.574 99.50000% : 22988.012us 00:08:02.574 99.90000% : 28634.191us 00:08:02.574 99.99000% : 29037.489us 00:08:02.574 99.99900% : 29037.489us 00:08:02.574 99.99990% : 29037.489us 00:08:02.574 99.99999% : 29037.489us 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5948.652us 00:08:02.574 10.00000% : 6301.538us 00:08:02.574 25.00000% : 6704.837us 00:08:02.574 50.00000% : 8116.382us 00:08:02.574 75.00000% : 9427.102us 00:08:02.574 90.00000% : 10687.409us 00:08:02.574 95.00000% : 11241.945us 00:08:02.574 98.00000% : 11998.129us 00:08:02.574 99.00000% : 12451.840us 00:08:02.574 99.50000% : 21374.818us 00:08:02.574 99.90000% : 27020.997us 00:08:02.574 99.99000% : 27424.295us 00:08:02.574 99.99900% : 27424.295us 00:08:02.574 99.99990% : 27424.295us 00:08:02.574 99.99999% : 27424.295us 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5948.652us 00:08:02.574 10.00000% : 6276.332us 00:08:02.574 25.00000% : 6704.837us 00:08:02.574 50.00000% : 8116.382us 00:08:02.574 75.00000% : 9376.689us 00:08:02.574 90.00000% : 10687.409us 00:08:02.574 95.00000% : 11241.945us 00:08:02.574 98.00000% : 12048.542us 00:08:02.574 
99.00000% : 12552.665us 00:08:02.574 99.50000% : 20366.572us 00:08:02.574 99.90000% : 26012.751us 00:08:02.574 99.99000% : 26416.049us 00:08:02.574 99.99900% : 26416.049us 00:08:02.574 99.99990% : 26416.049us 00:08:02.574 99.99999% : 26416.049us 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5923.446us 00:08:02.574 10.00000% : 6301.538us 00:08:02.574 25.00000% : 6704.837us 00:08:02.574 50.00000% : 8116.382us 00:08:02.574 75.00000% : 9376.689us 00:08:02.574 90.00000% : 10687.409us 00:08:02.574 95.00000% : 11241.945us 00:08:02.574 98.00000% : 12048.542us 00:08:02.574 99.00000% : 12502.252us 00:08:02.574 99.50000% : 18753.378us 00:08:02.574 99.90000% : 24298.732us 00:08:02.574 99.99000% : 24702.031us 00:08:02.574 99.99900% : 24702.031us 00:08:02.574 99.99990% : 24702.031us 00:08:02.574 99.99999% : 24702.031us 00:08:02.574 00:08:02.574 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.574 ================================================================================= 00:08:02.574 1.00000% : 5948.652us 00:08:02.574 10.00000% : 6276.332us 00:08:02.574 25.00000% : 6704.837us 00:08:02.574 50.00000% : 8065.969us 00:08:02.574 75.00000% : 9376.689us 00:08:02.574 90.00000% : 10737.822us 00:08:02.574 95.00000% : 11241.945us 00:08:02.574 98.00000% : 12098.954us 00:08:02.574 99.00000% : 12754.314us 00:08:02.574 99.50000% : 17140.185us 00:08:02.574 99.90000% : 22685.538us 00:08:02.574 99.99000% : 22988.012us 00:08:02.574 99.99900% : 23088.837us 00:08:02.574 99.99990% : 23088.837us 00:08:02.574 99.99999% : 23088.837us 00:08:02.574 00:08:02.574 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.574 ============================================================================== 00:08:02.574 Range in us Cumulative IO count 00:08:02.574 5772.209 - 5797.415: 0.0195% ( 3) 00:08:02.574 5797.415 - 5822.622: 0.1042% ( 13) 00:08:02.574 5822.622 - 5847.828: 0.1693% ( 10) 00:08:02.574 5847.828 - 5873.034: 0.3060% ( 21) 00:08:02.574 5873.034 - 5898.240: 0.6185% ( 48) 00:08:02.574 5898.240 - 5923.446: 0.9245% ( 47) 00:08:02.574 5923.446 - 5948.652: 1.3346% ( 63) 00:08:02.574 5948.652 - 5973.858: 1.7513% ( 64) 00:08:02.574 5973.858 - 5999.065: 2.0898% ( 52) 00:08:02.574 5999.065 - 6024.271: 2.5651% ( 73) 00:08:02.574 6024.271 - 6049.477: 3.2617% ( 107) 00:08:02.574 6049.477 - 6074.683: 3.8737% ( 94) 00:08:02.574 6074.683 - 6099.889: 4.5768% ( 108) 00:08:02.574 6099.889 - 6125.095: 5.2799% ( 108) 00:08:02.574 6125.095 - 6150.302: 6.0677% ( 121) 00:08:02.574 6150.302 - 6175.508: 6.8164% ( 115) 00:08:02.574 6175.508 - 6200.714: 7.5456% ( 112) 00:08:02.574 6200.714 - 6225.920: 8.3333% ( 121) 00:08:02.574 6225.920 - 6251.126: 9.1211% ( 121) 00:08:02.574 6251.126 - 6276.332: 10.0391% ( 141) 00:08:02.574 6276.332 - 6301.538: 10.8789% ( 129) 00:08:02.574 6301.538 - 6326.745: 11.6992% ( 126) 00:08:02.574 6326.745 - 6351.951: 12.5391% ( 129) 00:08:02.574 6351.951 - 6377.157: 13.4375% ( 138) 00:08:02.574 6377.157 - 6402.363: 14.3229% ( 136) 00:08:02.574 6402.363 - 6427.569: 15.2214% ( 138) 00:08:02.574 6427.569 - 6452.775: 16.1523% ( 143) 00:08:02.574 6452.775 - 6503.188: 18.1315% ( 304) 00:08:02.574 6503.188 - 6553.600: 20.2279% ( 322) 00:08:02.574 6553.600 - 6604.012: 22.1549% ( 296) 00:08:02.574 6604.012 - 6654.425: 24.1797% ( 311) 00:08:02.574 6654.425 - 6704.837: 25.9245% ( 268) 00:08:02.574 6704.837 - 6755.249: 27.5846% ( 
255) 00:08:02.574 6755.249 - 6805.662: 29.1797% ( 245) 00:08:02.574 6805.662 - 6856.074: 30.4818% ( 200) 00:08:02.574 6856.074 - 6906.486: 31.5755% ( 168) 00:08:02.574 6906.486 - 6956.898: 32.6042% ( 158) 00:08:02.574 6956.898 - 7007.311: 33.5091% ( 139) 00:08:02.574 7007.311 - 7057.723: 34.2839% ( 119) 00:08:02.574 7057.723 - 7108.135: 34.9154% ( 97) 00:08:02.574 7108.135 - 7158.548: 35.5404% ( 96) 00:08:02.574 7158.548 - 7208.960: 36.1849% ( 99) 00:08:02.574 7208.960 - 7259.372: 36.8620% ( 104) 00:08:02.574 7259.372 - 7309.785: 37.3568% ( 76) 00:08:02.574 7309.785 - 7360.197: 37.8451% ( 75) 00:08:02.574 7360.197 - 7410.609: 38.3919% ( 84) 00:08:02.574 7410.609 - 7461.022: 38.8281% ( 67) 00:08:02.574 7461.022 - 7511.434: 39.3945% ( 87) 00:08:02.574 7511.434 - 7561.846: 40.1302% ( 113) 00:08:02.574 7561.846 - 7612.258: 40.9570% ( 127) 00:08:02.574 7612.258 - 7662.671: 41.8034% ( 130) 00:08:02.574 7662.671 - 7713.083: 42.8776% ( 165) 00:08:02.574 7713.083 - 7763.495: 43.7891% ( 140) 00:08:02.574 7763.495 - 7813.908: 44.8177% ( 158) 00:08:02.574 7813.908 - 7864.320: 45.8464% ( 158) 00:08:02.574 7864.320 - 7914.732: 46.8815% ( 159) 00:08:02.574 7914.732 - 7965.145: 47.9688% ( 167) 00:08:02.574 7965.145 - 8015.557: 49.0690% ( 169) 00:08:02.574 8015.557 - 8065.969: 50.0977% ( 158) 00:08:02.574 8065.969 - 8116.382: 51.1003% ( 154) 00:08:02.574 8116.382 - 8166.794: 52.1289% ( 158) 00:08:02.574 8166.794 - 8217.206: 53.2617% ( 174) 00:08:02.574 8217.206 - 8267.618: 54.4336% ( 180) 00:08:02.574 8267.618 - 8318.031: 55.6315% ( 184) 00:08:02.574 8318.031 - 8368.443: 56.8164% ( 182) 00:08:02.574 8368.443 - 8418.855: 58.1250% ( 201) 00:08:02.574 8418.855 - 8469.268: 59.4141% ( 198) 00:08:02.574 8469.268 - 8519.680: 60.7487% ( 205) 00:08:02.574 8519.680 - 8570.092: 62.1289% ( 212) 00:08:02.574 8570.092 - 8620.505: 63.5026% ( 211) 00:08:02.574 8620.505 - 8670.917: 64.7917% ( 198) 00:08:02.574 8670.917 - 8721.329: 66.0026% ( 186) 00:08:02.574 8721.329 - 8771.742: 67.1159% ( 171) 00:08:02.574 8771.742 - 8822.154: 68.0469% ( 143) 00:08:02.574 8822.154 - 8872.566: 68.7956% ( 115) 00:08:02.574 8872.566 - 8922.978: 69.5508% ( 116) 00:08:02.574 8922.978 - 8973.391: 70.2604% ( 109) 00:08:02.574 8973.391 - 9023.803: 70.9635% ( 108) 00:08:02.574 9023.803 - 9074.215: 71.6602% ( 107) 00:08:02.575 9074.215 - 9124.628: 72.3242% ( 102) 00:08:02.575 9124.628 - 9175.040: 72.9492% ( 96) 00:08:02.575 9175.040 - 9225.452: 73.5221% ( 88) 00:08:02.575 9225.452 - 9275.865: 74.1081% ( 90) 00:08:02.575 9275.865 - 9326.277: 74.6224% ( 79) 00:08:02.575 9326.277 - 9376.689: 75.1367% ( 79) 00:08:02.575 9376.689 - 9427.102: 75.6445% ( 78) 00:08:02.575 9427.102 - 9477.514: 76.1068% ( 71) 00:08:02.575 9477.514 - 9527.926: 76.4714% ( 56) 00:08:02.575 9527.926 - 9578.338: 76.8490% ( 58) 00:08:02.575 9578.338 - 9628.751: 77.2656% ( 64) 00:08:02.575 9628.751 - 9679.163: 77.7409% ( 73) 00:08:02.575 9679.163 - 9729.575: 78.2812% ( 83) 00:08:02.575 9729.575 - 9779.988: 78.7695% ( 75) 00:08:02.575 9779.988 - 9830.400: 79.2253% ( 70) 00:08:02.575 9830.400 - 9880.812: 79.7005% ( 73) 00:08:02.575 9880.812 - 9931.225: 80.1888% ( 75) 00:08:02.575 9931.225 - 9981.637: 80.7552% ( 87) 00:08:02.575 9981.637 - 10032.049: 81.3021% ( 84) 00:08:02.575 10032.049 - 10082.462: 81.9076% ( 93) 00:08:02.575 10082.462 - 10132.874: 82.5195% ( 94) 00:08:02.575 10132.874 - 10183.286: 83.2422% ( 111) 00:08:02.575 10183.286 - 10233.698: 83.8216% ( 89) 00:08:02.575 10233.698 - 10284.111: 84.3880% ( 87) 00:08:02.575 10284.111 - 10334.523: 84.9740% ( 90) 00:08:02.575 
10334.523 - 10384.935: 85.5599% ( 90) 00:08:02.575 10384.935 - 10435.348: 86.2305% ( 103) 00:08:02.575 10435.348 - 10485.760: 86.8880% ( 101) 00:08:02.575 10485.760 - 10536.172: 87.5195% ( 97) 00:08:02.575 10536.172 - 10586.585: 88.1836% ( 102) 00:08:02.575 10586.585 - 10636.997: 88.8281% ( 99) 00:08:02.575 10636.997 - 10687.409: 89.5508% ( 111) 00:08:02.575 10687.409 - 10737.822: 90.1953% ( 99) 00:08:02.575 10737.822 - 10788.234: 90.8529% ( 101) 00:08:02.575 10788.234 - 10838.646: 91.3997% ( 84) 00:08:02.575 10838.646 - 10889.058: 91.8685% ( 72) 00:08:02.575 10889.058 - 10939.471: 92.3893% ( 80) 00:08:02.575 10939.471 - 10989.883: 92.8125% ( 65) 00:08:02.575 10989.883 - 11040.295: 93.1771% ( 56) 00:08:02.575 11040.295 - 11090.708: 93.6393% ( 71) 00:08:02.575 11090.708 - 11141.120: 94.1081% ( 72) 00:08:02.575 11141.120 - 11191.532: 94.5182% ( 63) 00:08:02.575 11191.532 - 11241.945: 94.9219% ( 62) 00:08:02.575 11241.945 - 11292.357: 95.3581% ( 67) 00:08:02.575 11292.357 - 11342.769: 95.6836% ( 50) 00:08:02.575 11342.769 - 11393.182: 95.9961% ( 48) 00:08:02.575 11393.182 - 11443.594: 96.2891% ( 45) 00:08:02.575 11443.594 - 11494.006: 96.5951% ( 47) 00:08:02.575 11494.006 - 11544.418: 96.9076% ( 48) 00:08:02.575 11544.418 - 11594.831: 97.1680% ( 40) 00:08:02.575 11594.831 - 11645.243: 97.3958% ( 35) 00:08:02.575 11645.243 - 11695.655: 97.5651% ( 26) 00:08:02.575 11695.655 - 11746.068: 97.7279% ( 25) 00:08:02.575 11746.068 - 11796.480: 97.8906% ( 25) 00:08:02.575 11796.480 - 11846.892: 98.0534% ( 25) 00:08:02.575 11846.892 - 11897.305: 98.1966% ( 22) 00:08:02.575 11897.305 - 11947.717: 98.3268% ( 20) 00:08:02.575 11947.717 - 11998.129: 98.4180% ( 14) 00:08:02.575 11998.129 - 12048.542: 98.4831% ( 10) 00:08:02.575 12048.542 - 12098.954: 98.5612% ( 12) 00:08:02.575 12098.954 - 12149.366: 98.6068% ( 7) 00:08:02.575 12149.366 - 12199.778: 98.6654% ( 9) 00:08:02.575 12199.778 - 12250.191: 98.7240% ( 9) 00:08:02.575 12250.191 - 12300.603: 98.7630% ( 6) 00:08:02.575 12300.603 - 12351.015: 98.8021% ( 6) 00:08:02.575 12351.015 - 12401.428: 98.8216% ( 3) 00:08:02.575 12401.428 - 12451.840: 98.8477% ( 4) 00:08:02.575 12451.840 - 12502.252: 98.8737% ( 4) 00:08:02.575 12502.252 - 12552.665: 98.9128% ( 6) 00:08:02.575 12552.665 - 12603.077: 98.9258% ( 2) 00:08:02.575 12603.077 - 12653.489: 98.9323% ( 1) 00:08:02.575 12653.489 - 12703.902: 98.9453% ( 2) 00:08:02.575 12703.902 - 12754.314: 98.9518% ( 1) 00:08:02.575 12754.314 - 12804.726: 98.9648% ( 2) 00:08:02.575 12804.726 - 12855.138: 98.9714% ( 1) 00:08:02.575 12855.138 - 12905.551: 98.9844% ( 2) 00:08:02.575 12905.551 - 13006.375: 99.0104% ( 4) 00:08:02.575 13006.375 - 13107.200: 99.0299% ( 3) 00:08:02.575 13107.200 - 13208.025: 99.0495% ( 3) 00:08:02.575 13208.025 - 13308.849: 99.0755% ( 4) 00:08:02.575 13308.849 - 13409.674: 99.1016% ( 4) 00:08:02.575 13409.674 - 13510.498: 99.1341% ( 5) 00:08:02.575 13510.498 - 13611.323: 99.1602% ( 4) 00:08:02.575 13611.323 - 13712.148: 99.1667% ( 1) 00:08:02.575 23391.311 - 23492.135: 99.1797% ( 2) 00:08:02.575 23492.135 - 23592.960: 99.2122% ( 5) 00:08:02.575 23592.960 - 23693.785: 99.2383% ( 4) 00:08:02.575 23693.785 - 23794.609: 99.2643% ( 4) 00:08:02.575 23794.609 - 23895.434: 99.2904% ( 4) 00:08:02.575 23895.434 - 23996.258: 99.3229% ( 5) 00:08:02.575 23996.258 - 24097.083: 99.3490% ( 4) 00:08:02.575 24097.083 - 24197.908: 99.3750% ( 4) 00:08:02.575 24197.908 - 24298.732: 99.4076% ( 5) 00:08:02.575 24298.732 - 24399.557: 99.4336% ( 4) 00:08:02.575 24399.557 - 24500.382: 99.4661% ( 5) 00:08:02.575 24500.382 - 
24601.206: 99.4922% ( 4) 00:08:02.575 24601.206 - 24702.031: 99.5182% ( 4) 00:08:02.575 24702.031 - 24802.855: 99.5508% ( 5) 00:08:02.575 24802.855 - 24903.680: 99.5768% ( 4) 00:08:02.575 24903.680 - 25004.505: 99.5833% ( 1) 00:08:02.575 28634.191 - 28835.840: 99.6159% ( 5) 00:08:02.575 28835.840 - 29037.489: 99.6680% ( 8) 00:08:02.575 29037.489 - 29239.138: 99.7201% ( 8) 00:08:02.575 29239.138 - 29440.788: 99.7721% ( 8) 00:08:02.575 29440.788 - 29642.437: 99.8242% ( 8) 00:08:02.575 29642.437 - 29844.086: 99.8698% ( 7) 00:08:02.575 29844.086 - 30045.735: 99.9284% ( 9) 00:08:02.575 30045.735 - 30247.385: 99.9740% ( 7) 00:08:02.575 30247.385 - 30449.034: 100.0000% ( 4) 00:08:02.575 00:08:02.575 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.575 ============================================================================== 00:08:02.575 Range in us Cumulative IO count 00:08:02.575 5646.178 - 5671.385: 0.0260% ( 4) 00:08:02.575 5671.385 - 5696.591: 0.0326% ( 1) 00:08:02.575 5696.591 - 5721.797: 0.0911% ( 9) 00:08:02.575 5721.797 - 5747.003: 0.1758% ( 13) 00:08:02.575 5747.003 - 5772.209: 0.2865% ( 17) 00:08:02.575 5772.209 - 5797.415: 0.4818% ( 30) 00:08:02.575 5797.415 - 5822.622: 0.7031% ( 34) 00:08:02.575 5822.622 - 5847.828: 0.9180% ( 33) 00:08:02.575 5847.828 - 5873.034: 1.1589% ( 37) 00:08:02.575 5873.034 - 5898.240: 1.4714% ( 48) 00:08:02.575 5898.240 - 5923.446: 1.8945% ( 65) 00:08:02.575 5923.446 - 5948.652: 2.3568% ( 71) 00:08:02.575 5948.652 - 5973.858: 2.9232% ( 87) 00:08:02.575 5973.858 - 5999.065: 3.4701% ( 84) 00:08:02.575 5999.065 - 6024.271: 3.9453% ( 73) 00:08:02.575 6024.271 - 6049.477: 4.5703% ( 96) 00:08:02.575 6049.477 - 6074.683: 5.1432% ( 88) 00:08:02.575 6074.683 - 6099.889: 5.7227% ( 89) 00:08:02.575 6099.889 - 6125.095: 6.3997% ( 104) 00:08:02.575 6125.095 - 6150.302: 7.0443% ( 99) 00:08:02.575 6150.302 - 6175.508: 7.7930% ( 115) 00:08:02.575 6175.508 - 6200.714: 8.5352% ( 114) 00:08:02.575 6200.714 - 6225.920: 9.3099% ( 119) 00:08:02.575 6225.920 - 6251.126: 9.9219% ( 94) 00:08:02.575 6251.126 - 6276.332: 10.7487% ( 127) 00:08:02.575 6276.332 - 6301.538: 11.4323% ( 105) 00:08:02.575 6301.538 - 6326.745: 12.1680% ( 113) 00:08:02.575 6326.745 - 6351.951: 12.9492% ( 120) 00:08:02.575 6351.951 - 6377.157: 13.8281% ( 135) 00:08:02.575 6377.157 - 6402.363: 14.5312% ( 108) 00:08:02.575 6402.363 - 6427.569: 15.4688% ( 144) 00:08:02.575 6427.569 - 6452.775: 16.2240% ( 116) 00:08:02.575 6452.775 - 6503.188: 17.9818% ( 270) 00:08:02.575 6503.188 - 6553.600: 19.6875% ( 262) 00:08:02.575 6553.600 - 6604.012: 21.5104% ( 280) 00:08:02.575 6604.012 - 6654.425: 23.2031% ( 260) 00:08:02.575 6654.425 - 6704.837: 24.9284% ( 265) 00:08:02.575 6704.837 - 6755.249: 26.5234% ( 245) 00:08:02.575 6755.249 - 6805.662: 28.0990% ( 242) 00:08:02.575 6805.662 - 6856.074: 29.5182% ( 218) 00:08:02.575 6856.074 - 6906.486: 30.8659% ( 207) 00:08:02.575 6906.486 - 6956.898: 31.9727% ( 170) 00:08:02.575 6956.898 - 7007.311: 32.8906% ( 141) 00:08:02.575 7007.311 - 7057.723: 33.8086% ( 141) 00:08:02.575 7057.723 - 7108.135: 34.5964% ( 121) 00:08:02.575 7108.135 - 7158.548: 35.2344% ( 98) 00:08:02.575 7158.548 - 7208.960: 35.8333% ( 92) 00:08:02.575 7208.960 - 7259.372: 36.3607% ( 81) 00:08:02.575 7259.372 - 7309.785: 36.9987% ( 98) 00:08:02.575 7309.785 - 7360.197: 37.6237% ( 96) 00:08:02.575 7360.197 - 7410.609: 38.2031% ( 89) 00:08:02.575 7410.609 - 7461.022: 38.9258% ( 111) 00:08:02.575 7461.022 - 7511.434: 39.7917% ( 133) 00:08:02.575 7511.434 - 7561.846: 40.6576% ( 133) 
00:08:02.575 7561.846 - 7612.258: 41.5495% ( 137) 00:08:02.575 7612.258 - 7662.671: 42.5781% ( 158) 00:08:02.575 7662.671 - 7713.083: 43.5156% ( 144) 00:08:02.575 7713.083 - 7763.495: 44.3685% ( 131) 00:08:02.575 7763.495 - 7813.908: 45.3060% ( 144) 00:08:02.575 7813.908 - 7864.320: 46.3216% ( 156) 00:08:02.575 7864.320 - 7914.732: 47.1615% ( 129) 00:08:02.575 7914.732 - 7965.145: 48.1315% ( 149) 00:08:02.575 7965.145 - 8015.557: 49.0885% ( 147) 00:08:02.575 8015.557 - 8065.969: 50.1628% ( 165) 00:08:02.575 8065.969 - 8116.382: 51.0872% ( 142) 00:08:02.575 8116.382 - 8166.794: 52.1549% ( 164) 00:08:02.575 8166.794 - 8217.206: 53.2292% ( 165) 00:08:02.575 8217.206 - 8267.618: 54.3229% ( 168) 00:08:02.575 8267.618 - 8318.031: 55.3516% ( 158) 00:08:02.575 8318.031 - 8368.443: 56.4779% ( 173) 00:08:02.575 8368.443 - 8418.855: 57.7018% ( 188) 00:08:02.575 8418.855 - 8469.268: 58.8932% ( 183) 00:08:02.576 8469.268 - 8519.680: 60.0846% ( 183) 00:08:02.576 8519.680 - 8570.092: 61.2305% ( 176) 00:08:02.576 8570.092 - 8620.505: 62.4805% ( 192) 00:08:02.576 8620.505 - 8670.917: 63.6979% ( 187) 00:08:02.576 8670.917 - 8721.329: 64.8372% ( 175) 00:08:02.576 8721.329 - 8771.742: 65.9375% ( 169) 00:08:02.576 8771.742 - 8822.154: 66.8815% ( 145) 00:08:02.576 8822.154 - 8872.566: 67.8320% ( 146) 00:08:02.576 8872.566 - 8922.978: 68.7044% ( 134) 00:08:02.576 8922.978 - 8973.391: 69.5182% ( 125) 00:08:02.576 8973.391 - 9023.803: 70.3125% ( 122) 00:08:02.576 9023.803 - 9074.215: 71.0482% ( 113) 00:08:02.576 9074.215 - 9124.628: 71.7318% ( 105) 00:08:02.576 9124.628 - 9175.040: 72.4414% ( 109) 00:08:02.576 9175.040 - 9225.452: 73.1771% ( 113) 00:08:02.576 9225.452 - 9275.865: 73.8411% ( 102) 00:08:02.576 9275.865 - 9326.277: 74.5052% ( 102) 00:08:02.576 9326.277 - 9376.689: 75.1237% ( 95) 00:08:02.576 9376.689 - 9427.102: 75.5469% ( 65) 00:08:02.576 9427.102 - 9477.514: 76.1458% ( 92) 00:08:02.576 9477.514 - 9527.926: 76.6081% ( 71) 00:08:02.576 9527.926 - 9578.338: 77.2396% ( 97) 00:08:02.576 9578.338 - 9628.751: 77.6237% ( 59) 00:08:02.576 9628.751 - 9679.163: 78.0273% ( 62) 00:08:02.576 9679.163 - 9729.575: 78.5352% ( 78) 00:08:02.576 9729.575 - 9779.988: 79.0690% ( 82) 00:08:02.576 9779.988 - 9830.400: 79.5573% ( 75) 00:08:02.576 9830.400 - 9880.812: 80.0651% ( 78) 00:08:02.576 9880.812 - 9931.225: 80.6250% ( 86) 00:08:02.576 9931.225 - 9981.637: 81.2435% ( 95) 00:08:02.576 9981.637 - 10032.049: 81.7969% ( 85) 00:08:02.576 10032.049 - 10082.462: 82.4544% ( 101) 00:08:02.576 10082.462 - 10132.874: 83.0404% ( 90) 00:08:02.576 10132.874 - 10183.286: 83.5938% ( 85) 00:08:02.576 10183.286 - 10233.698: 84.1667% ( 88) 00:08:02.576 10233.698 - 10284.111: 84.7135% ( 84) 00:08:02.576 10284.111 - 10334.523: 85.2669% ( 85) 00:08:02.576 10334.523 - 10384.935: 85.9115% ( 99) 00:08:02.576 10384.935 - 10435.348: 86.4388% ( 81) 00:08:02.576 10435.348 - 10485.760: 87.0898% ( 100) 00:08:02.576 10485.760 - 10536.172: 87.7083% ( 95) 00:08:02.576 10536.172 - 10586.585: 88.3333% ( 96) 00:08:02.576 10586.585 - 10636.997: 88.9779% ( 99) 00:08:02.576 10636.997 - 10687.409: 89.5573% ( 89) 00:08:02.576 10687.409 - 10737.822: 90.2669% ( 109) 00:08:02.576 10737.822 - 10788.234: 90.9766% ( 109) 00:08:02.576 10788.234 - 10838.646: 91.6016% ( 96) 00:08:02.576 10838.646 - 10889.058: 92.1094% ( 78) 00:08:02.576 10889.058 - 10939.471: 92.6693% ( 86) 00:08:02.576 10939.471 - 10989.883: 93.1315% ( 71) 00:08:02.576 10989.883 - 11040.295: 93.5221% ( 60) 00:08:02.576 11040.295 - 11090.708: 93.9388% ( 64) 00:08:02.576 11090.708 - 11141.120: 
94.2773% ( 52) 00:08:02.576 11141.120 - 11191.532: 94.6680% ( 60) 00:08:02.576 11191.532 - 11241.945: 95.0000% ( 51) 00:08:02.576 11241.945 - 11292.357: 95.3255% ( 50) 00:08:02.576 11292.357 - 11342.769: 95.6901% ( 56) 00:08:02.576 11342.769 - 11393.182: 95.9310% ( 37) 00:08:02.576 11393.182 - 11443.594: 96.1979% ( 41) 00:08:02.576 11443.594 - 11494.006: 96.5169% ( 49) 00:08:02.576 11494.006 - 11544.418: 96.7708% ( 39) 00:08:02.576 11544.418 - 11594.831: 97.0117% ( 37) 00:08:02.576 11594.831 - 11645.243: 97.2396% ( 35) 00:08:02.576 11645.243 - 11695.655: 97.3958% ( 24) 00:08:02.576 11695.655 - 11746.068: 97.5586% ( 25) 00:08:02.576 11746.068 - 11796.480: 97.6953% ( 21) 00:08:02.576 11796.480 - 11846.892: 97.7995% ( 16) 00:08:02.576 11846.892 - 11897.305: 97.9167% ( 18) 00:08:02.576 11897.305 - 11947.717: 98.0273% ( 17) 00:08:02.576 11947.717 - 11998.129: 98.1250% ( 15) 00:08:02.576 11998.129 - 12048.542: 98.2422% ( 18) 00:08:02.576 12048.542 - 12098.954: 98.3268% ( 13) 00:08:02.576 12098.954 - 12149.366: 98.4245% ( 15) 00:08:02.576 12149.366 - 12199.778: 98.4961% ( 11) 00:08:02.576 12199.778 - 12250.191: 98.6849% ( 29) 00:08:02.576 12250.191 - 12300.603: 98.7435% ( 9) 00:08:02.576 12300.603 - 12351.015: 98.8021% ( 9) 00:08:02.576 12351.015 - 12401.428: 98.8607% ( 9) 00:08:02.576 12401.428 - 12451.840: 98.9128% ( 8) 00:08:02.576 12451.840 - 12502.252: 98.9648% ( 8) 00:08:02.576 12502.252 - 12552.665: 99.0104% ( 7) 00:08:02.576 12552.665 - 12603.077: 99.0495% ( 6) 00:08:02.576 12603.077 - 12653.489: 99.0625% ( 2) 00:08:02.576 12653.489 - 12703.902: 99.0690% ( 1) 00:08:02.576 12703.902 - 12754.314: 99.0755% ( 1) 00:08:02.576 12754.314 - 12804.726: 99.0951% ( 3) 00:08:02.576 12804.726 - 12855.138: 99.1016% ( 1) 00:08:02.576 12855.138 - 12905.551: 99.1146% ( 2) 00:08:02.576 12905.551 - 13006.375: 99.1341% ( 3) 00:08:02.576 13006.375 - 13107.200: 99.1602% ( 4) 00:08:02.576 13107.200 - 13208.025: 99.1667% ( 1) 00:08:02.576 21576.468 - 21677.292: 99.1862% ( 3) 00:08:02.576 21677.292 - 21778.117: 99.2057% ( 3) 00:08:02.576 21778.117 - 21878.942: 99.2253% ( 3) 00:08:02.576 21878.942 - 21979.766: 99.2578% ( 5) 00:08:02.576 21979.766 - 22080.591: 99.2773% ( 3) 00:08:02.576 22080.591 - 22181.415: 99.3034% ( 4) 00:08:02.576 22181.415 - 22282.240: 99.3294% ( 4) 00:08:02.576 22282.240 - 22383.065: 99.3490% ( 3) 00:08:02.576 22383.065 - 22483.889: 99.3815% ( 5) 00:08:02.576 22483.889 - 22584.714: 99.4076% ( 4) 00:08:02.576 22584.714 - 22685.538: 99.4336% ( 4) 00:08:02.576 22685.538 - 22786.363: 99.4531% ( 3) 00:08:02.576 22786.363 - 22887.188: 99.4857% ( 5) 00:08:02.576 22887.188 - 22988.012: 99.5052% ( 3) 00:08:02.576 22988.012 - 23088.837: 99.5378% ( 5) 00:08:02.576 23088.837 - 23189.662: 99.5573% ( 3) 00:08:02.576 23189.662 - 23290.486: 99.5833% ( 4) 00:08:02.576 27020.997 - 27222.646: 99.5964% ( 2) 00:08:02.576 27222.646 - 27424.295: 99.6354% ( 6) 00:08:02.576 27424.295 - 27625.945: 99.6875% ( 8) 00:08:02.576 27625.945 - 27827.594: 99.7461% ( 9) 00:08:02.576 27827.594 - 28029.243: 99.7721% ( 4) 00:08:02.576 28029.243 - 28230.892: 99.8242% ( 8) 00:08:02.576 28230.892 - 28432.542: 99.8698% ( 7) 00:08:02.576 28432.542 - 28634.191: 99.9154% ( 7) 00:08:02.576 28634.191 - 28835.840: 99.9609% ( 7) 00:08:02.576 28835.840 - 29037.489: 100.0000% ( 6) 00:08:02.576 00:08:02.576 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.576 ============================================================================== 00:08:02.576 Range in us Cumulative IO count 00:08:02.576 5772.209 - 5797.415: 0.1172% 
( 18) 00:08:02.576 5797.415 - 5822.622: 0.1628% ( 7) 00:08:02.576 5822.622 - 5847.828: 0.2409% ( 12) 00:08:02.576 5847.828 - 5873.034: 0.4167% ( 27) 00:08:02.576 5873.034 - 5898.240: 0.5924% ( 27) 00:08:02.576 5898.240 - 5923.446: 0.8594% ( 41) 00:08:02.576 5923.446 - 5948.652: 1.2565% ( 61) 00:08:02.576 5948.652 - 5973.858: 1.6536% ( 61) 00:08:02.576 5973.858 - 5999.065: 1.9922% ( 52) 00:08:02.576 5999.065 - 6024.271: 2.4414% ( 69) 00:08:02.576 6024.271 - 6049.477: 2.9167% ( 73) 00:08:02.576 6049.477 - 6074.683: 3.5417% ( 96) 00:08:02.576 6074.683 - 6099.889: 4.4531% ( 140) 00:08:02.576 6099.889 - 6125.095: 5.3581% ( 139) 00:08:02.576 6125.095 - 6150.302: 6.1198% ( 117) 00:08:02.576 6150.302 - 6175.508: 6.7839% ( 102) 00:08:02.576 6175.508 - 6200.714: 7.4674% ( 105) 00:08:02.576 6200.714 - 6225.920: 8.1445% ( 104) 00:08:02.576 6225.920 - 6251.126: 8.9258% ( 120) 00:08:02.576 6251.126 - 6276.332: 9.7786% ( 131) 00:08:02.576 6276.332 - 6301.538: 10.6836% ( 139) 00:08:02.576 6301.538 - 6326.745: 11.6341% ( 146) 00:08:02.576 6326.745 - 6351.951: 12.6628% ( 158) 00:08:02.576 6351.951 - 6377.157: 13.6523% ( 152) 00:08:02.576 6377.157 - 6402.363: 14.5117% ( 132) 00:08:02.576 6402.363 - 6427.569: 15.3776% ( 133) 00:08:02.576 6427.569 - 6452.775: 16.2760% ( 138) 00:08:02.576 6452.775 - 6503.188: 18.4049% ( 327) 00:08:02.576 6503.188 - 6553.600: 20.4297% ( 311) 00:08:02.576 6553.600 - 6604.012: 22.2852% ( 285) 00:08:02.576 6604.012 - 6654.425: 24.1536% ( 287) 00:08:02.576 6654.425 - 6704.837: 25.9831% ( 281) 00:08:02.576 6704.837 - 6755.249: 27.7669% ( 274) 00:08:02.576 6755.249 - 6805.662: 29.3555% ( 244) 00:08:02.576 6805.662 - 6856.074: 30.5794% ( 188) 00:08:02.576 6856.074 - 6906.486: 31.6927% ( 171) 00:08:02.576 6906.486 - 6956.898: 32.7018% ( 155) 00:08:02.576 6956.898 - 7007.311: 33.6133% ( 140) 00:08:02.576 7007.311 - 7057.723: 34.3750% ( 117) 00:08:02.576 7057.723 - 7108.135: 35.0000% ( 96) 00:08:02.576 7108.135 - 7158.548: 35.5794% ( 89) 00:08:02.576 7158.548 - 7208.960: 36.1458% ( 87) 00:08:02.576 7208.960 - 7259.372: 36.6602% ( 79) 00:08:02.576 7259.372 - 7309.785: 37.1159% ( 70) 00:08:02.576 7309.785 - 7360.197: 37.6172% ( 77) 00:08:02.576 7360.197 - 7410.609: 38.0990% ( 74) 00:08:02.576 7410.609 - 7461.022: 38.6068% ( 78) 00:08:02.576 7461.022 - 7511.434: 39.0560% ( 69) 00:08:02.576 7511.434 - 7561.846: 39.6484% ( 91) 00:08:02.576 7561.846 - 7612.258: 40.4557% ( 124) 00:08:02.576 7612.258 - 7662.671: 41.2826% ( 127) 00:08:02.576 7662.671 - 7713.083: 42.1680% ( 136) 00:08:02.576 7713.083 - 7763.495: 43.2422% ( 165) 00:08:02.576 7763.495 - 7813.908: 44.3880% ( 176) 00:08:02.576 7813.908 - 7864.320: 45.4167% ( 158) 00:08:02.576 7864.320 - 7914.732: 46.4714% ( 162) 00:08:02.576 7914.732 - 7965.145: 47.4609% ( 152) 00:08:02.576 7965.145 - 8015.557: 48.4115% ( 146) 00:08:02.576 8015.557 - 8065.969: 49.4531% ( 160) 00:08:02.576 8065.969 - 8116.382: 50.6185% ( 179) 00:08:02.576 8116.382 - 8166.794: 51.7253% ( 170) 00:08:02.577 8166.794 - 8217.206: 52.9102% ( 182) 00:08:02.577 8217.206 - 8267.618: 54.1016% ( 183) 00:08:02.577 8267.618 - 8318.031: 55.4036% ( 200) 00:08:02.577 8318.031 - 8368.443: 56.6667% ( 194) 00:08:02.577 8368.443 - 8418.855: 57.8581% ( 183) 00:08:02.577 8418.855 - 8469.268: 59.1146% ( 193) 00:08:02.577 8469.268 - 8519.680: 60.4167% ( 200) 00:08:02.577 8519.680 - 8570.092: 61.8424% ( 219) 00:08:02.577 8570.092 - 8620.505: 63.1120% ( 195) 00:08:02.577 8620.505 - 8670.917: 64.3945% ( 197) 00:08:02.577 8670.917 - 8721.329: 65.5404% ( 176) 00:08:02.577 8721.329 - 8771.742: 
66.5039% ( 148) 00:08:02.577 8771.742 - 8822.154: 67.3633% ( 132) 00:08:02.577 8822.154 - 8872.566: 68.2552% ( 137) 00:08:02.577 8872.566 - 8922.978: 69.0885% ( 128) 00:08:02.577 8922.978 - 8973.391: 69.8112% ( 111) 00:08:02.577 8973.391 - 9023.803: 70.5078% ( 107) 00:08:02.577 9023.803 - 9074.215: 71.1393% ( 97) 00:08:02.577 9074.215 - 9124.628: 71.8490% ( 109) 00:08:02.577 9124.628 - 9175.040: 72.4870% ( 98) 00:08:02.577 9175.040 - 9225.452: 73.1380% ( 100) 00:08:02.577 9225.452 - 9275.865: 73.7435% ( 93) 00:08:02.577 9275.865 - 9326.277: 74.3685% ( 96) 00:08:02.577 9326.277 - 9376.689: 74.9674% ( 92) 00:08:02.577 9376.689 - 9427.102: 75.5404% ( 88) 00:08:02.577 9427.102 - 9477.514: 76.0938% ( 85) 00:08:02.577 9477.514 - 9527.926: 76.6276% ( 82) 00:08:02.577 9527.926 - 9578.338: 77.2135% ( 90) 00:08:02.577 9578.338 - 9628.751: 77.7148% ( 77) 00:08:02.577 9628.751 - 9679.163: 78.1315% ( 64) 00:08:02.577 9679.163 - 9729.575: 78.6328% ( 77) 00:08:02.577 9729.575 - 9779.988: 79.0820% ( 69) 00:08:02.577 9779.988 - 9830.400: 79.5443% ( 71) 00:08:02.577 9830.400 - 9880.812: 79.9870% ( 68) 00:08:02.577 9880.812 - 9931.225: 80.4297% ( 68) 00:08:02.577 9931.225 - 9981.637: 80.9635% ( 82) 00:08:02.577 9981.637 - 10032.049: 81.5169% ( 85) 00:08:02.577 10032.049 - 10082.462: 82.1289% ( 94) 00:08:02.577 10082.462 - 10132.874: 82.7865% ( 101) 00:08:02.577 10132.874 - 10183.286: 83.4831% ( 107) 00:08:02.577 10183.286 - 10233.698: 84.1667% ( 105) 00:08:02.577 10233.698 - 10284.111: 84.8372% ( 103) 00:08:02.577 10284.111 - 10334.523: 85.5078% ( 103) 00:08:02.577 10334.523 - 10384.935: 86.1589% ( 100) 00:08:02.577 10384.935 - 10435.348: 86.7773% ( 95) 00:08:02.577 10435.348 - 10485.760: 87.3893% ( 94) 00:08:02.577 10485.760 - 10536.172: 88.0664% ( 104) 00:08:02.577 10536.172 - 10586.585: 88.7565% ( 106) 00:08:02.577 10586.585 - 10636.997: 89.4531% ( 107) 00:08:02.577 10636.997 - 10687.409: 90.1302% ( 104) 00:08:02.577 10687.409 - 10737.822: 90.7357% ( 93) 00:08:02.577 10737.822 - 10788.234: 91.3802% ( 99) 00:08:02.577 10788.234 - 10838.646: 91.9596% ( 89) 00:08:02.577 10838.646 - 10889.058: 92.5195% ( 86) 00:08:02.577 10889.058 - 10939.471: 92.9688% ( 69) 00:08:02.577 10939.471 - 10989.883: 93.3919% ( 65) 00:08:02.577 10989.883 - 11040.295: 93.7174% ( 50) 00:08:02.577 11040.295 - 11090.708: 94.0820% ( 56) 00:08:02.577 11090.708 - 11141.120: 94.4206% ( 52) 00:08:02.577 11141.120 - 11191.532: 94.7396% ( 49) 00:08:02.577 11191.532 - 11241.945: 95.0521% ( 48) 00:08:02.577 11241.945 - 11292.357: 95.3646% ( 48) 00:08:02.577 11292.357 - 11342.769: 95.7096% ( 53) 00:08:02.577 11342.769 - 11393.182: 95.9896% ( 43) 00:08:02.577 11393.182 - 11443.594: 96.2370% ( 38) 00:08:02.577 11443.594 - 11494.006: 96.4193% ( 28) 00:08:02.577 11494.006 - 11544.418: 96.5820% ( 25) 00:08:02.577 11544.418 - 11594.831: 96.7188% ( 21) 00:08:02.577 11594.831 - 11645.243: 96.8750% ( 24) 00:08:02.577 11645.243 - 11695.655: 97.0703% ( 30) 00:08:02.577 11695.655 - 11746.068: 97.2396% ( 26) 00:08:02.577 11746.068 - 11796.480: 97.4349% ( 30) 00:08:02.577 11796.480 - 11846.892: 97.5846% ( 23) 00:08:02.577 11846.892 - 11897.305: 97.7474% ( 25) 00:08:02.577 11897.305 - 11947.717: 97.9232% ( 27) 00:08:02.577 11947.717 - 11998.129: 98.0664% ( 22) 00:08:02.577 11998.129 - 12048.542: 98.2422% ( 27) 00:08:02.577 12048.542 - 12098.954: 98.3789% ( 21) 00:08:02.577 12098.954 - 12149.366: 98.5221% ( 22) 00:08:02.577 12149.366 - 12199.778: 98.6263% ( 16) 00:08:02.577 12199.778 - 12250.191: 98.7044% ( 12) 00:08:02.577 12250.191 - 12300.603: 98.7956% ( 14) 
00:08:02.577 12300.603 - 12351.015: 98.8802% ( 13) 00:08:02.577 12351.015 - 12401.428: 98.9453% ( 10) 00:08:02.577 12401.428 - 12451.840: 99.0234% ( 12) 00:08:02.577 12451.840 - 12502.252: 99.0820% ( 9) 00:08:02.577 12502.252 - 12552.665: 99.1471% ( 10) 00:08:02.577 12552.665 - 12603.077: 99.1667% ( 3) 00:08:02.577 20064.098 - 20164.923: 99.2057% ( 6) 00:08:02.577 20164.923 - 20265.748: 99.2318% ( 4) 00:08:02.577 20265.748 - 20366.572: 99.2578% ( 4) 00:08:02.577 20366.572 - 20467.397: 99.2839% ( 4) 00:08:02.577 20467.397 - 20568.222: 99.3099% ( 4) 00:08:02.577 20568.222 - 20669.046: 99.3359% ( 4) 00:08:02.577 20669.046 - 20769.871: 99.3620% ( 4) 00:08:02.577 20769.871 - 20870.695: 99.3880% ( 4) 00:08:02.577 20870.695 - 20971.520: 99.4141% ( 4) 00:08:02.577 20971.520 - 21072.345: 99.4401% ( 4) 00:08:02.577 21072.345 - 21173.169: 99.4727% ( 5) 00:08:02.577 21173.169 - 21273.994: 99.4987% ( 4) 00:08:02.577 21273.994 - 21374.818: 99.5247% ( 4) 00:08:02.577 21374.818 - 21475.643: 99.5508% ( 4) 00:08:02.577 21475.643 - 21576.468: 99.5833% ( 5) 00:08:02.577 25609.452 - 25710.277: 99.5964% ( 2) 00:08:02.577 25710.277 - 25811.102: 99.6159% ( 3) 00:08:02.577 25811.102 - 26012.751: 99.6680% ( 8) 00:08:02.577 26012.751 - 26214.400: 99.7135% ( 7) 00:08:02.577 26214.400 - 26416.049: 99.7591% ( 7) 00:08:02.577 26416.049 - 26617.698: 99.8112% ( 8) 00:08:02.577 26617.698 - 26819.348: 99.8633% ( 8) 00:08:02.577 26819.348 - 27020.997: 99.9089% ( 7) 00:08:02.577 27020.997 - 27222.646: 99.9609% ( 8) 00:08:02.577 27222.646 - 27424.295: 100.0000% ( 6) 00:08:02.577 00:08:02.577 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.577 ============================================================================== 00:08:02.577 Range in us Cumulative IO count 00:08:02.577 5747.003 - 5772.209: 0.0260% ( 4) 00:08:02.577 5772.209 - 5797.415: 0.0456% ( 3) 00:08:02.577 5797.415 - 5822.622: 0.1107% ( 10) 00:08:02.577 5822.622 - 5847.828: 0.2604% ( 23) 00:08:02.577 5847.828 - 5873.034: 0.3971% ( 21) 00:08:02.577 5873.034 - 5898.240: 0.6641% ( 41) 00:08:02.577 5898.240 - 5923.446: 0.9505% ( 44) 00:08:02.577 5923.446 - 5948.652: 1.1914% ( 37) 00:08:02.577 5948.652 - 5973.858: 1.5820% ( 60) 00:08:02.577 5973.858 - 5999.065: 2.1354% ( 85) 00:08:02.577 5999.065 - 6024.271: 2.6953% ( 86) 00:08:02.577 6024.271 - 6049.477: 3.2878% ( 91) 00:08:02.577 6049.477 - 6074.683: 3.9974% ( 109) 00:08:02.577 6074.683 - 6099.889: 4.6810% ( 105) 00:08:02.577 6099.889 - 6125.095: 5.3451% ( 102) 00:08:02.577 6125.095 - 6150.302: 6.1003% ( 116) 00:08:02.577 6150.302 - 6175.508: 6.7513% ( 100) 00:08:02.577 6175.508 - 6200.714: 7.5000% ( 115) 00:08:02.577 6200.714 - 6225.920: 8.2161% ( 110) 00:08:02.577 6225.920 - 6251.126: 9.0755% ( 132) 00:08:02.577 6251.126 - 6276.332: 10.0521% ( 150) 00:08:02.577 6276.332 - 6301.538: 10.8919% ( 129) 00:08:02.577 6301.538 - 6326.745: 11.6862% ( 122) 00:08:02.577 6326.745 - 6351.951: 12.5326% ( 130) 00:08:02.577 6351.951 - 6377.157: 13.4245% ( 137) 00:08:02.577 6377.157 - 6402.363: 14.3164% ( 137) 00:08:02.577 6402.363 - 6427.569: 15.2018% ( 136) 00:08:02.577 6427.569 - 6452.775: 16.1458% ( 145) 00:08:02.577 6452.775 - 6503.188: 18.1641% ( 310) 00:08:02.577 6503.188 - 6553.600: 20.2083% ( 314) 00:08:02.577 6553.600 - 6604.012: 22.1549% ( 299) 00:08:02.577 6604.012 - 6654.425: 24.1732% ( 310) 00:08:02.577 6654.425 - 6704.837: 25.9701% ( 276) 00:08:02.577 6704.837 - 6755.249: 27.7930% ( 280) 00:08:02.577 6755.249 - 6805.662: 29.3424% ( 238) 00:08:02.577 6805.662 - 6856.074: 30.6771% ( 205) 
00:08:02.577 6856.074 - 6906.486: 31.8164% ( 175) 00:08:02.577 6906.486 - 6956.898: 32.9102% ( 168) 00:08:02.577 6956.898 - 7007.311: 33.8346% ( 142) 00:08:02.577 7007.311 - 7057.723: 34.7005% ( 133) 00:08:02.577 7057.723 - 7108.135: 35.3776% ( 104) 00:08:02.577 7108.135 - 7158.548: 35.9635% ( 90) 00:08:02.577 7158.548 - 7208.960: 36.5755% ( 94) 00:08:02.577 7208.960 - 7259.372: 37.1680% ( 91) 00:08:02.577 7259.372 - 7309.785: 37.5911% ( 65) 00:08:02.577 7309.785 - 7360.197: 38.0469% ( 70) 00:08:02.577 7360.197 - 7410.609: 38.5417% ( 76) 00:08:02.577 7410.609 - 7461.022: 38.9648% ( 65) 00:08:02.577 7461.022 - 7511.434: 39.5508% ( 90) 00:08:02.577 7511.434 - 7561.846: 40.0716% ( 80) 00:08:02.577 7561.846 - 7612.258: 40.7487% ( 104) 00:08:02.577 7612.258 - 7662.671: 41.7057% ( 147) 00:08:02.577 7662.671 - 7713.083: 42.7669% ( 163) 00:08:02.577 7713.083 - 7763.495: 43.5938% ( 127) 00:08:02.577 7763.495 - 7813.908: 44.5312% ( 144) 00:08:02.577 7813.908 - 7864.320: 45.4036% ( 134) 00:08:02.577 7864.320 - 7914.732: 46.3867% ( 151) 00:08:02.577 7914.732 - 7965.145: 47.2852% ( 138) 00:08:02.577 7965.145 - 8015.557: 48.2878% ( 154) 00:08:02.577 8015.557 - 8065.969: 49.4727% ( 182) 00:08:02.577 8065.969 - 8116.382: 50.6120% ( 175) 00:08:02.577 8116.382 - 8166.794: 51.6602% ( 161) 00:08:02.577 8166.794 - 8217.206: 52.8451% ( 182) 00:08:02.577 8217.206 - 8267.618: 54.0104% ( 179) 00:08:02.577 8267.618 - 8318.031: 55.2539% ( 191) 00:08:02.577 8318.031 - 8368.443: 56.5495% ( 199) 00:08:02.577 8368.443 - 8418.855: 57.9753% ( 219) 00:08:02.578 8418.855 - 8469.268: 59.3620% ( 213) 00:08:02.578 8469.268 - 8519.680: 60.6120% ( 192) 00:08:02.578 8519.680 - 8570.092: 61.9271% ( 202) 00:08:02.578 8570.092 - 8620.505: 63.3008% ( 211) 00:08:02.578 8620.505 - 8670.917: 64.5898% ( 198) 00:08:02.578 8670.917 - 8721.329: 65.8594% ( 195) 00:08:02.578 8721.329 - 8771.742: 66.9206% ( 163) 00:08:02.578 8771.742 - 8822.154: 67.8646% ( 145) 00:08:02.578 8822.154 - 8872.566: 68.6719% ( 124) 00:08:02.578 8872.566 - 8922.978: 69.5182% ( 130) 00:08:02.578 8922.978 - 8973.391: 70.2148% ( 107) 00:08:02.578 8973.391 - 9023.803: 70.9505% ( 113) 00:08:02.578 9023.803 - 9074.215: 71.6081% ( 101) 00:08:02.578 9074.215 - 9124.628: 72.2852% ( 104) 00:08:02.578 9124.628 - 9175.040: 72.9232% ( 98) 00:08:02.578 9175.040 - 9225.452: 73.5026% ( 89) 00:08:02.578 9225.452 - 9275.865: 74.0169% ( 79) 00:08:02.578 9275.865 - 9326.277: 74.4987% ( 74) 00:08:02.578 9326.277 - 9376.689: 75.0521% ( 85) 00:08:02.578 9376.689 - 9427.102: 75.6055% ( 85) 00:08:02.578 9427.102 - 9477.514: 76.0742% ( 72) 00:08:02.578 9477.514 - 9527.926: 76.4974% ( 65) 00:08:02.578 9527.926 - 9578.338: 76.9206% ( 65) 00:08:02.578 9578.338 - 9628.751: 77.3893% ( 72) 00:08:02.578 9628.751 - 9679.163: 77.8451% ( 70) 00:08:02.578 9679.163 - 9729.575: 78.2682% ( 65) 00:08:02.578 9729.575 - 9779.988: 78.6589% ( 60) 00:08:02.578 9779.988 - 9830.400: 79.1016% ( 68) 00:08:02.578 9830.400 - 9880.812: 79.6029% ( 77) 00:08:02.578 9880.812 - 9931.225: 80.1888% ( 90) 00:08:02.578 9931.225 - 9981.637: 80.7422% ( 85) 00:08:02.578 9981.637 - 10032.049: 81.3932% ( 100) 00:08:02.578 10032.049 - 10082.462: 82.0508% ( 101) 00:08:02.578 10082.462 - 10132.874: 82.7799% ( 112) 00:08:02.578 10132.874 - 10183.286: 83.5286% ( 115) 00:08:02.578 10183.286 - 10233.698: 84.2513% ( 111) 00:08:02.578 10233.698 - 10284.111: 84.9414% ( 106) 00:08:02.578 10284.111 - 10334.523: 85.6576% ( 110) 00:08:02.578 10334.523 - 10384.935: 86.3086% ( 100) 00:08:02.578 10384.935 - 10435.348: 86.9661% ( 101) 00:08:02.578 
10435.348 - 10485.760: 87.6823% ( 110) 00:08:02.578 10485.760 - 10536.172: 88.3464% ( 102) 00:08:02.578 10536.172 - 10586.585: 88.9974% ( 100) 00:08:02.578 10586.585 - 10636.997: 89.6680% ( 103) 00:08:02.578 10636.997 - 10687.409: 90.3190% ( 100) 00:08:02.578 10687.409 - 10737.822: 91.0156% ( 107) 00:08:02.578 10737.822 - 10788.234: 91.6081% ( 91) 00:08:02.578 10788.234 - 10838.646: 92.1615% ( 85) 00:08:02.578 10838.646 - 10889.058: 92.6107% ( 69) 00:08:02.578 10889.058 - 10939.471: 93.0534% ( 68) 00:08:02.578 10939.471 - 10989.883: 93.4310% ( 58) 00:08:02.578 10989.883 - 11040.295: 93.7630% ( 51) 00:08:02.578 11040.295 - 11090.708: 94.0755% ( 48) 00:08:02.578 11090.708 - 11141.120: 94.3945% ( 49) 00:08:02.578 11141.120 - 11191.532: 94.7201% ( 50) 00:08:02.578 11191.532 - 11241.945: 95.0716% ( 54) 00:08:02.578 11241.945 - 11292.357: 95.3255% ( 39) 00:08:02.578 11292.357 - 11342.769: 95.5729% ( 38) 00:08:02.578 11342.769 - 11393.182: 95.8073% ( 36) 00:08:02.578 11393.182 - 11443.594: 96.0286% ( 34) 00:08:02.578 11443.594 - 11494.006: 96.1914% ( 25) 00:08:02.578 11494.006 - 11544.418: 96.3802% ( 29) 00:08:02.578 11544.418 - 11594.831: 96.5951% ( 33) 00:08:02.578 11594.831 - 11645.243: 96.8034% ( 32) 00:08:02.578 11645.243 - 11695.655: 96.9792% ( 27) 00:08:02.578 11695.655 - 11746.068: 97.1615% ( 28) 00:08:02.578 11746.068 - 11796.480: 97.3177% ( 24) 00:08:02.578 11796.480 - 11846.892: 97.4544% ( 21) 00:08:02.578 11846.892 - 11897.305: 97.5846% ( 20) 00:08:02.578 11897.305 - 11947.717: 97.7344% ( 23) 00:08:02.578 11947.717 - 11998.129: 97.8971% ( 25) 00:08:02.578 11998.129 - 12048.542: 98.0469% ( 23) 00:08:02.578 12048.542 - 12098.954: 98.1901% ( 22) 00:08:02.578 12098.954 - 12149.366: 98.3138% ( 19) 00:08:02.578 12149.366 - 12199.778: 98.4310% ( 18) 00:08:02.578 12199.778 - 12250.191: 98.5547% ( 19) 00:08:02.578 12250.191 - 12300.603: 98.6654% ( 17) 00:08:02.578 12300.603 - 12351.015: 98.7565% ( 14) 00:08:02.578 12351.015 - 12401.428: 98.8542% ( 15) 00:08:02.578 12401.428 - 12451.840: 98.9388% ( 13) 00:08:02.578 12451.840 - 12502.252: 98.9974% ( 9) 00:08:02.578 12502.252 - 12552.665: 99.0365% ( 6) 00:08:02.578 12552.665 - 12603.077: 99.0560% ( 3) 00:08:02.578 12603.077 - 12653.489: 99.0755% ( 3) 00:08:02.578 12653.489 - 12703.902: 99.0885% ( 2) 00:08:02.578 12703.902 - 12754.314: 99.1016% ( 2) 00:08:02.578 12754.314 - 12804.726: 99.1146% ( 2) 00:08:02.578 12804.726 - 12855.138: 99.1341% ( 3) 00:08:02.578 12855.138 - 12905.551: 99.1471% ( 2) 00:08:02.578 12905.551 - 13006.375: 99.1667% ( 3) 00:08:02.578 18854.203 - 18955.028: 99.1797% ( 2) 00:08:02.578 18955.028 - 19055.852: 99.2057% ( 4) 00:08:02.578 19055.852 - 19156.677: 99.2188% ( 2) 00:08:02.578 19156.677 - 19257.502: 99.2448% ( 4) 00:08:02.578 19257.502 - 19358.326: 99.2708% ( 4) 00:08:02.578 19358.326 - 19459.151: 99.2904% ( 3) 00:08:02.578 19459.151 - 19559.975: 99.3164% ( 4) 00:08:02.578 19559.975 - 19660.800: 99.3424% ( 4) 00:08:02.578 19660.800 - 19761.625: 99.3620% ( 3) 00:08:02.578 19761.625 - 19862.449: 99.3880% ( 4) 00:08:02.578 19862.449 - 19963.274: 99.4141% ( 4) 00:08:02.578 19963.274 - 20064.098: 99.4401% ( 4) 00:08:02.578 20064.098 - 20164.923: 99.4596% ( 3) 00:08:02.578 20164.923 - 20265.748: 99.4857% ( 4) 00:08:02.578 20265.748 - 20366.572: 99.5052% ( 3) 00:08:02.578 20366.572 - 20467.397: 99.5312% ( 4) 00:08:02.578 20467.397 - 20568.222: 99.5508% ( 3) 00:08:02.578 20568.222 - 20669.046: 99.5768% ( 4) 00:08:02.578 20669.046 - 20769.871: 99.5833% ( 1) 00:08:02.578 24500.382 - 24601.206: 99.5898% ( 1) 00:08:02.578 
24601.206 - 24702.031: 99.6159% ( 4) 00:08:02.578 24702.031 - 24802.855: 99.6354% ( 3) 00:08:02.578 24802.855 - 24903.680: 99.6615% ( 4) 00:08:02.578 24903.680 - 25004.505: 99.6875% ( 4) 00:08:02.578 25004.505 - 25105.329: 99.7135% ( 4) 00:08:02.578 25105.329 - 25206.154: 99.7396% ( 4) 00:08:02.578 25206.154 - 25306.978: 99.7656% ( 4) 00:08:02.578 25306.978 - 25407.803: 99.7852% ( 3) 00:08:02.578 25407.803 - 25508.628: 99.8112% ( 4) 00:08:02.578 25508.628 - 25609.452: 99.8372% ( 4) 00:08:02.578 25609.452 - 25710.277: 99.8568% ( 3) 00:08:02.578 25710.277 - 25811.102: 99.8828% ( 4) 00:08:02.578 25811.102 - 26012.751: 99.9349% ( 8) 00:08:02.578 26012.751 - 26214.400: 99.9805% ( 7) 00:08:02.578 26214.400 - 26416.049: 100.0000% ( 3) 00:08:02.578 00:08:02.578 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.578 ============================================================================== 00:08:02.578 Range in us Cumulative IO count 00:08:02.578 5772.209 - 5797.415: 0.0130% ( 2) 00:08:02.578 5797.415 - 5822.622: 0.0586% ( 7) 00:08:02.578 5822.622 - 5847.828: 0.1562% ( 15) 00:08:02.578 5847.828 - 5873.034: 0.3711% ( 33) 00:08:02.578 5873.034 - 5898.240: 0.7031% ( 51) 00:08:02.578 5898.240 - 5923.446: 1.0482% ( 53) 00:08:02.578 5923.446 - 5948.652: 1.2695% ( 34) 00:08:02.578 5948.652 - 5973.858: 1.5495% ( 43) 00:08:02.578 5973.858 - 5999.065: 1.9792% ( 66) 00:08:02.578 5999.065 - 6024.271: 2.5326% ( 85) 00:08:02.578 6024.271 - 6049.477: 3.1641% ( 97) 00:08:02.578 6049.477 - 6074.683: 3.8086% ( 99) 00:08:02.578 6074.683 - 6099.889: 4.5508% ( 114) 00:08:02.578 6099.889 - 6125.095: 5.3320% ( 120) 00:08:02.578 6125.095 - 6150.302: 6.1133% ( 120) 00:08:02.578 6150.302 - 6175.508: 6.8359% ( 111) 00:08:02.578 6175.508 - 6200.714: 7.5521% ( 110) 00:08:02.578 6200.714 - 6225.920: 8.2357% ( 105) 00:08:02.578 6225.920 - 6251.126: 8.9974% ( 117) 00:08:02.578 6251.126 - 6276.332: 9.8503% ( 131) 00:08:02.578 6276.332 - 6301.538: 10.8333% ( 151) 00:08:02.578 6301.538 - 6326.745: 11.7318% ( 138) 00:08:02.578 6326.745 - 6351.951: 12.6042% ( 134) 00:08:02.578 6351.951 - 6377.157: 13.4701% ( 133) 00:08:02.578 6377.157 - 6402.363: 14.3880% ( 141) 00:08:02.579 6402.363 - 6427.569: 15.2734% ( 136) 00:08:02.579 6427.569 - 6452.775: 16.2174% ( 145) 00:08:02.579 6452.775 - 6503.188: 18.1510% ( 297) 00:08:02.579 6503.188 - 6553.600: 20.2083% ( 316) 00:08:02.579 6553.600 - 6604.012: 22.2786% ( 318) 00:08:02.579 6604.012 - 6654.425: 24.1862% ( 293) 00:08:02.579 6654.425 - 6704.837: 26.0156% ( 281) 00:08:02.579 6704.837 - 6755.249: 27.6302% ( 248) 00:08:02.579 6755.249 - 6805.662: 29.2057% ( 242) 00:08:02.579 6805.662 - 6856.074: 30.5339% ( 204) 00:08:02.579 6856.074 - 6906.486: 31.7773% ( 191) 00:08:02.579 6906.486 - 6956.898: 32.8190% ( 160) 00:08:02.579 6956.898 - 7007.311: 33.7565% ( 144) 00:08:02.579 7007.311 - 7057.723: 34.5768% ( 126) 00:08:02.579 7057.723 - 7108.135: 35.3060% ( 112) 00:08:02.579 7108.135 - 7158.548: 35.8854% ( 89) 00:08:02.579 7158.548 - 7208.960: 36.4648% ( 89) 00:08:02.579 7208.960 - 7259.372: 36.9727% ( 78) 00:08:02.579 7259.372 - 7309.785: 37.5000% ( 81) 00:08:02.579 7309.785 - 7360.197: 37.9688% ( 72) 00:08:02.579 7360.197 - 7410.609: 38.4440% ( 73) 00:08:02.579 7410.609 - 7461.022: 38.8867% ( 68) 00:08:02.579 7461.022 - 7511.434: 39.3685% ( 74) 00:08:02.579 7511.434 - 7561.846: 40.0195% ( 100) 00:08:02.579 7561.846 - 7612.258: 40.7878% ( 118) 00:08:02.579 7612.258 - 7662.671: 41.7057% ( 141) 00:08:02.579 7662.671 - 7713.083: 42.7214% ( 156) 00:08:02.579 7713.083 - 7763.495: 
43.7956% ( 165) 00:08:02.579 7763.495 - 7813.908: 44.7526% ( 147) 00:08:02.579 7813.908 - 7864.320: 45.5208% ( 118) 00:08:02.579 7864.320 - 7914.732: 46.4062% ( 136) 00:08:02.579 7914.732 - 7965.145: 47.2656% ( 132) 00:08:02.579 7965.145 - 8015.557: 48.3398% ( 165) 00:08:02.579 8015.557 - 8065.969: 49.3099% ( 149) 00:08:02.579 8065.969 - 8116.382: 50.4492% ( 175) 00:08:02.579 8116.382 - 8166.794: 51.6406% ( 183) 00:08:02.579 8166.794 - 8217.206: 52.8255% ( 182) 00:08:02.579 8217.206 - 8267.618: 54.1016% ( 196) 00:08:02.579 8267.618 - 8318.031: 55.4036% ( 200) 00:08:02.579 8318.031 - 8368.443: 56.7643% ( 209) 00:08:02.579 8368.443 - 8418.855: 58.0078% ( 191) 00:08:02.579 8418.855 - 8469.268: 59.3685% ( 209) 00:08:02.579 8469.268 - 8519.680: 60.7161% ( 207) 00:08:02.579 8519.680 - 8570.092: 62.0964% ( 212) 00:08:02.579 8570.092 - 8620.505: 63.5091% ( 217) 00:08:02.579 8620.505 - 8670.917: 64.7852% ( 196) 00:08:02.579 8670.917 - 8721.329: 66.0286% ( 191) 00:08:02.579 8721.329 - 8771.742: 67.0573% ( 158) 00:08:02.579 8771.742 - 8822.154: 67.9557% ( 138) 00:08:02.579 8822.154 - 8872.566: 68.7370% ( 120) 00:08:02.579 8872.566 - 8922.978: 69.5312% ( 122) 00:08:02.579 8922.978 - 8973.391: 70.2604% ( 112) 00:08:02.579 8973.391 - 9023.803: 70.9375% ( 104) 00:08:02.579 9023.803 - 9074.215: 71.6406% ( 108) 00:08:02.579 9074.215 - 9124.628: 72.2526% ( 94) 00:08:02.579 9124.628 - 9175.040: 72.7669% ( 79) 00:08:02.579 9175.040 - 9225.452: 73.3398% ( 88) 00:08:02.579 9225.452 - 9275.865: 73.9453% ( 93) 00:08:02.579 9275.865 - 9326.277: 74.5312% ( 90) 00:08:02.579 9326.277 - 9376.689: 75.0391% ( 78) 00:08:02.579 9376.689 - 9427.102: 75.5990% ( 86) 00:08:02.579 9427.102 - 9477.514: 76.0872% ( 75) 00:08:02.579 9477.514 - 9527.926: 76.5234% ( 67) 00:08:02.579 9527.926 - 9578.338: 76.9792% ( 70) 00:08:02.579 9578.338 - 9628.751: 77.4349% ( 70) 00:08:02.579 9628.751 - 9679.163: 77.8971% ( 71) 00:08:02.579 9679.163 - 9729.575: 78.3594% ( 71) 00:08:02.579 9729.575 - 9779.988: 78.8411% ( 74) 00:08:02.579 9779.988 - 9830.400: 79.3490% ( 78) 00:08:02.579 9830.400 - 9880.812: 79.8893% ( 83) 00:08:02.579 9880.812 - 9931.225: 80.5013% ( 94) 00:08:02.579 9931.225 - 9981.637: 81.1263% ( 96) 00:08:02.579 9981.637 - 10032.049: 81.7773% ( 100) 00:08:02.579 10032.049 - 10082.462: 82.4023% ( 96) 00:08:02.579 10082.462 - 10132.874: 82.9948% ( 91) 00:08:02.579 10132.874 - 10183.286: 83.5677% ( 88) 00:08:02.579 10183.286 - 10233.698: 84.1992% ( 97) 00:08:02.579 10233.698 - 10284.111: 84.7461% ( 84) 00:08:02.579 10284.111 - 10334.523: 85.3190% ( 88) 00:08:02.579 10334.523 - 10384.935: 85.9635% ( 99) 00:08:02.579 10384.935 - 10435.348: 86.7122% ( 115) 00:08:02.579 10435.348 - 10485.760: 87.3828% ( 103) 00:08:02.579 10485.760 - 10536.172: 88.0534% ( 103) 00:08:02.579 10536.172 - 10586.585: 88.7500% ( 107) 00:08:02.579 10586.585 - 10636.997: 89.3945% ( 99) 00:08:02.579 10636.997 - 10687.409: 90.0260% ( 97) 00:08:02.579 10687.409 - 10737.822: 90.7161% ( 106) 00:08:02.579 10737.822 - 10788.234: 91.3411% ( 96) 00:08:02.579 10788.234 - 10838.646: 91.8555% ( 79) 00:08:02.579 10838.646 - 10889.058: 92.3568% ( 77) 00:08:02.579 10889.058 - 10939.471: 92.8060% ( 69) 00:08:02.579 10939.471 - 10989.883: 93.2682% ( 71) 00:08:02.579 10989.883 - 11040.295: 93.6979% ( 66) 00:08:02.579 11040.295 - 11090.708: 94.1081% ( 63) 00:08:02.579 11090.708 - 11141.120: 94.5117% ( 62) 00:08:02.579 11141.120 - 11191.532: 94.9089% ( 61) 00:08:02.579 11191.532 - 11241.945: 95.2279% ( 49) 00:08:02.579 11241.945 - 11292.357: 95.5078% ( 43) 00:08:02.579 11292.357 - 
11342.769: 95.7096% ( 31) 00:08:02.579 11342.769 - 11393.182: 95.9180% ( 32) 00:08:02.579 11393.182 - 11443.594: 96.1263% ( 32) 00:08:02.579 11443.594 - 11494.006: 96.2826% ( 24) 00:08:02.579 11494.006 - 11544.418: 96.4453% ( 25) 00:08:02.579 11544.418 - 11594.831: 96.5625% ( 18) 00:08:02.579 11594.831 - 11645.243: 96.6862% ( 19) 00:08:02.579 11645.243 - 11695.655: 96.8294% ( 22) 00:08:02.579 11695.655 - 11746.068: 96.9857% ( 24) 00:08:02.579 11746.068 - 11796.480: 97.1419% ( 24) 00:08:02.579 11796.480 - 11846.892: 97.2852% ( 22) 00:08:02.579 11846.892 - 11897.305: 97.4674% ( 28) 00:08:02.579 11897.305 - 11947.717: 97.6693% ( 31) 00:08:02.579 11947.717 - 11998.129: 97.8516% ( 28) 00:08:02.579 11998.129 - 12048.542: 98.0143% ( 25) 00:08:02.579 12048.542 - 12098.954: 98.1641% ( 23) 00:08:02.579 12098.954 - 12149.366: 98.3203% ( 24) 00:08:02.579 12149.366 - 12199.778: 98.4570% ( 21) 00:08:02.579 12199.778 - 12250.191: 98.5938% ( 21) 00:08:02.579 12250.191 - 12300.603: 98.7044% ( 17) 00:08:02.579 12300.603 - 12351.015: 98.7956% ( 14) 00:08:02.579 12351.015 - 12401.428: 98.8867% ( 14) 00:08:02.579 12401.428 - 12451.840: 98.9714% ( 13) 00:08:02.579 12451.840 - 12502.252: 99.0560% ( 13) 00:08:02.579 12502.252 - 12552.665: 99.1081% ( 8) 00:08:02.579 12552.665 - 12603.077: 99.1406% ( 5) 00:08:02.579 12603.077 - 12653.489: 99.1667% ( 4) 00:08:02.579 17241.009 - 17341.834: 99.1732% ( 1) 00:08:02.579 17341.834 - 17442.658: 99.1927% ( 3) 00:08:02.579 17442.658 - 17543.483: 99.2188% ( 4) 00:08:02.579 17543.483 - 17644.308: 99.2448% ( 4) 00:08:02.579 17644.308 - 17745.132: 99.2708% ( 4) 00:08:02.579 17745.132 - 17845.957: 99.2969% ( 4) 00:08:02.579 17845.957 - 17946.782: 99.3164% ( 3) 00:08:02.579 17946.782 - 18047.606: 99.3424% ( 4) 00:08:02.579 18047.606 - 18148.431: 99.3685% ( 4) 00:08:02.579 18148.431 - 18249.255: 99.3945% ( 4) 00:08:02.579 18249.255 - 18350.080: 99.4206% ( 4) 00:08:02.579 18350.080 - 18450.905: 99.4401% ( 3) 00:08:02.579 18450.905 - 18551.729: 99.4661% ( 4) 00:08:02.579 18551.729 - 18652.554: 99.4922% ( 4) 00:08:02.579 18652.554 - 18753.378: 99.5182% ( 4) 00:08:02.579 18753.378 - 18854.203: 99.5443% ( 4) 00:08:02.579 18854.203 - 18955.028: 99.5638% ( 3) 00:08:02.579 18955.028 - 19055.852: 99.5833% ( 3) 00:08:02.579 22887.188 - 22988.012: 99.6029% ( 3) 00:08:02.579 22988.012 - 23088.837: 99.6289% ( 4) 00:08:02.579 23088.837 - 23189.662: 99.6549% ( 4) 00:08:02.579 23189.662 - 23290.486: 99.6745% ( 3) 00:08:02.579 23290.486 - 23391.311: 99.6940% ( 3) 00:08:02.579 23391.311 - 23492.135: 99.7201% ( 4) 00:08:02.579 23492.135 - 23592.960: 99.7461% ( 4) 00:08:02.579 23592.960 - 23693.785: 99.7721% ( 4) 00:08:02.579 23693.785 - 23794.609: 99.7917% ( 3) 00:08:02.579 23794.609 - 23895.434: 99.8177% ( 4) 00:08:02.579 23895.434 - 23996.258: 99.8438% ( 4) 00:08:02.579 23996.258 - 24097.083: 99.8698% ( 4) 00:08:02.579 24097.083 - 24197.908: 99.8958% ( 4) 00:08:02.579 24197.908 - 24298.732: 99.9154% ( 3) 00:08:02.579 24298.732 - 24399.557: 99.9349% ( 3) 00:08:02.579 24399.557 - 24500.382: 99.9609% ( 4) 00:08:02.579 24500.382 - 24601.206: 99.9870% ( 4) 00:08:02.579 24601.206 - 24702.031: 100.0000% ( 2) 00:08:02.579 00:08:02.579 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.579 ============================================================================== 00:08:02.579 Range in us Cumulative IO count 00:08:02.579 5747.003 - 5772.209: 0.0065% ( 1) 00:08:02.579 5772.209 - 5797.415: 0.0260% ( 3) 00:08:02.579 5797.415 - 5822.622: 0.0586% ( 5) 00:08:02.579 5822.622 - 5847.828: 
0.1172% ( 9) 00:08:02.579 5847.828 - 5873.034: 0.2930% ( 27) 00:08:02.579 5873.034 - 5898.240: 0.5924% ( 46) 00:08:02.579 5898.240 - 5923.446: 0.9831% ( 60) 00:08:02.579 5923.446 - 5948.652: 1.3477% ( 56) 00:08:02.579 5948.652 - 5973.858: 1.6992% ( 54) 00:08:02.579 5973.858 - 5999.065: 2.0638% ( 56) 00:08:02.579 5999.065 - 6024.271: 2.5521% ( 75) 00:08:02.579 6024.271 - 6049.477: 3.2161% ( 102) 00:08:02.579 6049.477 - 6074.683: 3.8346% ( 95) 00:08:02.579 6074.683 - 6099.889: 4.5964% ( 117) 00:08:02.579 6099.889 - 6125.095: 5.2474% ( 100) 00:08:02.579 6125.095 - 6150.302: 6.0482% ( 123) 00:08:02.579 6150.302 - 6175.508: 6.8424% ( 122) 00:08:02.579 6175.508 - 6200.714: 7.6237% ( 120) 00:08:02.579 6200.714 - 6225.920: 8.3919% ( 118) 00:08:02.579 6225.920 - 6251.126: 9.2318% ( 129) 00:08:02.580 6251.126 - 6276.332: 10.1172% ( 136) 00:08:02.580 6276.332 - 6301.538: 10.9570% ( 129) 00:08:02.580 6301.538 - 6326.745: 11.8815% ( 142) 00:08:02.580 6326.745 - 6351.951: 12.8776% ( 153) 00:08:02.580 6351.951 - 6377.157: 13.7435% ( 133) 00:08:02.580 6377.157 - 6402.363: 14.6680% ( 142) 00:08:02.580 6402.363 - 6427.569: 15.6055% ( 144) 00:08:02.580 6427.569 - 6452.775: 16.5234% ( 141) 00:08:02.580 6452.775 - 6503.188: 18.4310% ( 293) 00:08:02.580 6503.188 - 6553.600: 20.3711% ( 298) 00:08:02.580 6553.600 - 6604.012: 22.3893% ( 310) 00:08:02.580 6604.012 - 6654.425: 24.2643% ( 288) 00:08:02.580 6654.425 - 6704.837: 25.9831% ( 264) 00:08:02.580 6704.837 - 6755.249: 27.5326% ( 238) 00:08:02.580 6755.249 - 6805.662: 29.0560% ( 234) 00:08:02.580 6805.662 - 6856.074: 30.4492% ( 214) 00:08:02.580 6856.074 - 6906.486: 31.6602% ( 186) 00:08:02.580 6906.486 - 6956.898: 32.5977% ( 144) 00:08:02.580 6956.898 - 7007.311: 33.4831% ( 136) 00:08:02.580 7007.311 - 7057.723: 34.3359% ( 131) 00:08:02.580 7057.723 - 7108.135: 34.9740% ( 98) 00:08:02.580 7108.135 - 7158.548: 35.5664% ( 91) 00:08:02.580 7158.548 - 7208.960: 36.1719% ( 93) 00:08:02.580 7208.960 - 7259.372: 36.8099% ( 98) 00:08:02.580 7259.372 - 7309.785: 37.3307% ( 80) 00:08:02.580 7309.785 - 7360.197: 37.8385% ( 78) 00:08:02.580 7360.197 - 7410.609: 38.3008% ( 71) 00:08:02.580 7410.609 - 7461.022: 38.8281% ( 81) 00:08:02.580 7461.022 - 7511.434: 39.3620% ( 82) 00:08:02.580 7511.434 - 7561.846: 40.0065% ( 99) 00:08:02.580 7561.846 - 7612.258: 40.8398% ( 128) 00:08:02.580 7612.258 - 7662.671: 41.6992% ( 132) 00:08:02.580 7662.671 - 7713.083: 42.8841% ( 182) 00:08:02.580 7713.083 - 7763.495: 43.8997% ( 156) 00:08:02.580 7763.495 - 7813.908: 44.9479% ( 161) 00:08:02.580 7813.908 - 7864.320: 46.0091% ( 163) 00:08:02.580 7864.320 - 7914.732: 47.0443% ( 159) 00:08:02.580 7914.732 - 7965.145: 48.0534% ( 155) 00:08:02.580 7965.145 - 8015.557: 49.1797% ( 173) 00:08:02.580 8015.557 - 8065.969: 50.3581% ( 181) 00:08:02.580 8065.969 - 8116.382: 51.5234% ( 179) 00:08:02.580 8116.382 - 8166.794: 52.5521% ( 158) 00:08:02.580 8166.794 - 8217.206: 53.6328% ( 166) 00:08:02.580 8217.206 - 8267.618: 54.7266% ( 168) 00:08:02.580 8267.618 - 8318.031: 55.8008% ( 165) 00:08:02.580 8318.031 - 8368.443: 56.9531% ( 177) 00:08:02.580 8368.443 - 8418.855: 58.1380% ( 182) 00:08:02.580 8418.855 - 8469.268: 59.5898% ( 223) 00:08:02.580 8469.268 - 8519.680: 60.9701% ( 212) 00:08:02.580 8519.680 - 8570.092: 62.2591% ( 198) 00:08:02.580 8570.092 - 8620.505: 63.5221% ( 194) 00:08:02.580 8620.505 - 8670.917: 64.6940% ( 180) 00:08:02.580 8670.917 - 8721.329: 65.8333% ( 175) 00:08:02.580 8721.329 - 8771.742: 66.9076% ( 165) 00:08:02.580 8771.742 - 8822.154: 67.8776% ( 149) 00:08:02.580 8822.154 - 
8872.566: 68.6914% ( 125) 00:08:02.580 8872.566 - 8922.978: 69.4596% ( 118) 00:08:02.580 8922.978 - 8973.391: 70.1562% ( 107) 00:08:02.580 8973.391 - 9023.803: 70.8268% ( 103) 00:08:02.580 9023.803 - 9074.215: 71.4453% ( 95) 00:08:02.580 9074.215 - 9124.628: 72.1159% ( 103) 00:08:02.580 9124.628 - 9175.040: 72.8190% ( 108) 00:08:02.580 9175.040 - 9225.452: 73.5221% ( 108) 00:08:02.580 9225.452 - 9275.865: 74.1146% ( 91) 00:08:02.580 9275.865 - 9326.277: 74.6549% ( 83) 00:08:02.580 9326.277 - 9376.689: 75.1758% ( 80) 00:08:02.580 9376.689 - 9427.102: 75.6771% ( 77) 00:08:02.580 9427.102 - 9477.514: 76.1849% ( 78) 00:08:02.580 9477.514 - 9527.926: 76.6602% ( 73) 00:08:02.580 9527.926 - 9578.338: 77.1615% ( 77) 00:08:02.580 9578.338 - 9628.751: 77.5651% ( 62) 00:08:02.580 9628.751 - 9679.163: 77.9297% ( 56) 00:08:02.580 9679.163 - 9729.575: 78.3073% ( 58) 00:08:02.580 9729.575 - 9779.988: 78.7240% ( 64) 00:08:02.580 9779.988 - 9830.400: 79.2253% ( 77) 00:08:02.580 9830.400 - 9880.812: 79.7526% ( 81) 00:08:02.580 9880.812 - 9931.225: 80.3581% ( 93) 00:08:02.580 9931.225 - 9981.637: 80.9115% ( 85) 00:08:02.580 9981.637 - 10032.049: 81.4844% ( 88) 00:08:02.580 10032.049 - 10082.462: 82.1289% ( 99) 00:08:02.580 10082.462 - 10132.874: 82.7474% ( 95) 00:08:02.580 10132.874 - 10183.286: 83.3984% ( 100) 00:08:02.580 10183.286 - 10233.698: 84.1211% ( 111) 00:08:02.580 10233.698 - 10284.111: 84.7135% ( 91) 00:08:02.580 10284.111 - 10334.523: 85.3190% ( 93) 00:08:02.580 10334.523 - 10384.935: 85.8724% ( 85) 00:08:02.580 10384.935 - 10435.348: 86.4779% ( 93) 00:08:02.580 10435.348 - 10485.760: 87.1354% ( 101) 00:08:02.580 10485.760 - 10536.172: 87.7799% ( 99) 00:08:02.580 10536.172 - 10586.585: 88.3854% ( 93) 00:08:02.580 10586.585 - 10636.997: 89.0755% ( 106) 00:08:02.580 10636.997 - 10687.409: 89.7331% ( 101) 00:08:02.580 10687.409 - 10737.822: 90.3841% ( 100) 00:08:02.580 10737.822 - 10788.234: 90.9440% ( 86) 00:08:02.580 10788.234 - 10838.646: 91.5039% ( 86) 00:08:02.580 10838.646 - 10889.058: 92.0703% ( 87) 00:08:02.580 10889.058 - 10939.471: 92.5391% ( 72) 00:08:02.580 10939.471 - 10989.883: 92.9753% ( 67) 00:08:02.580 10989.883 - 11040.295: 93.4049% ( 66) 00:08:02.580 11040.295 - 11090.708: 93.8346% ( 66) 00:08:02.580 11090.708 - 11141.120: 94.2773% ( 68) 00:08:02.580 11141.120 - 11191.532: 94.6289% ( 54) 00:08:02.580 11191.532 - 11241.945: 95.0065% ( 58) 00:08:02.580 11241.945 - 11292.357: 95.3451% ( 52) 00:08:02.580 11292.357 - 11342.769: 95.6576% ( 48) 00:08:02.580 11342.769 - 11393.182: 95.9245% ( 41) 00:08:02.580 11393.182 - 11443.594: 96.1458% ( 34) 00:08:02.580 11443.594 - 11494.006: 96.3737% ( 35) 00:08:02.580 11494.006 - 11544.418: 96.5495% ( 27) 00:08:02.580 11544.418 - 11594.831: 96.6667% ( 18) 00:08:02.580 11594.831 - 11645.243: 96.7839% ( 18) 00:08:02.580 11645.243 - 11695.655: 96.9076% ( 19) 00:08:02.580 11695.655 - 11746.068: 97.0378% ( 20) 00:08:02.580 11746.068 - 11796.480: 97.2070% ( 26) 00:08:02.580 11796.480 - 11846.892: 97.3568% ( 23) 00:08:02.580 11846.892 - 11897.305: 97.5326% ( 27) 00:08:02.580 11897.305 - 11947.717: 97.6888% ( 24) 00:08:02.580 11947.717 - 11998.129: 97.8320% ( 22) 00:08:02.580 11998.129 - 12048.542: 97.9753% ( 22) 00:08:02.580 12048.542 - 12098.954: 98.0859% ( 17) 00:08:02.580 12098.954 - 12149.366: 98.1771% ( 14) 00:08:02.580 12149.366 - 12199.778: 98.2812% ( 16) 00:08:02.580 12199.778 - 12250.191: 98.3659% ( 13) 00:08:02.580 12250.191 - 12300.603: 98.4375% ( 11) 00:08:02.580 12300.603 - 12351.015: 98.5091% ( 11) 00:08:02.580 12351.015 - 12401.428: 
98.5938% ( 13) 00:08:02.580 12401.428 - 12451.840: 98.6784% ( 13) 00:08:02.580 12451.840 - 12502.252: 98.7695% ( 14) 00:08:02.580 12502.252 - 12552.665: 98.8281% ( 9) 00:08:02.580 12552.665 - 12603.077: 98.8932% ( 10) 00:08:02.580 12603.077 - 12653.489: 98.9323% ( 6) 00:08:02.580 12653.489 - 12703.902: 98.9714% ( 6) 00:08:02.580 12703.902 - 12754.314: 99.0169% ( 7) 00:08:02.580 12754.314 - 12804.726: 99.0495% ( 5) 00:08:02.580 12804.726 - 12855.138: 99.0820% ( 5) 00:08:02.580 12855.138 - 12905.551: 99.0951% ( 2) 00:08:02.580 12905.551 - 13006.375: 99.1276% ( 5) 00:08:02.580 13006.375 - 13107.200: 99.1602% ( 5) 00:08:02.580 13107.200 - 13208.025: 99.1667% ( 1) 00:08:02.580 15627.815 - 15728.640: 99.1732% ( 1) 00:08:02.580 15728.640 - 15829.465: 99.1927% ( 3) 00:08:02.580 15829.465 - 15930.289: 99.2122% ( 3) 00:08:02.580 15930.289 - 16031.114: 99.2448% ( 5) 00:08:02.580 16031.114 - 16131.938: 99.2708% ( 4) 00:08:02.580 16131.938 - 16232.763: 99.2904% ( 3) 00:08:02.580 16232.763 - 16333.588: 99.3164% ( 4) 00:08:02.580 16333.588 - 16434.412: 99.3424% ( 4) 00:08:02.580 16434.412 - 16535.237: 99.3620% ( 3) 00:08:02.580 16535.237 - 16636.062: 99.3880% ( 4) 00:08:02.580 16636.062 - 16736.886: 99.4141% ( 4) 00:08:02.580 16736.886 - 16837.711: 99.4401% ( 4) 00:08:02.580 16837.711 - 16938.535: 99.4661% ( 4) 00:08:02.580 16938.535 - 17039.360: 99.4857% ( 3) 00:08:02.580 17039.360 - 17140.185: 99.5117% ( 4) 00:08:02.580 17140.185 - 17241.009: 99.5378% ( 4) 00:08:02.580 17241.009 - 17341.834: 99.5638% ( 4) 00:08:02.580 17341.834 - 17442.658: 99.5833% ( 3) 00:08:02.580 21273.994 - 21374.818: 99.6029% ( 3) 00:08:02.580 21374.818 - 21475.643: 99.6289% ( 4) 00:08:02.580 21475.643 - 21576.468: 99.6549% ( 4) 00:08:02.580 21576.468 - 21677.292: 99.6745% ( 3) 00:08:02.580 21677.292 - 21778.117: 99.7005% ( 4) 00:08:02.580 21778.117 - 21878.942: 99.7266% ( 4) 00:08:02.580 21878.942 - 21979.766: 99.7461% ( 3) 00:08:02.580 21979.766 - 22080.591: 99.7721% ( 4) 00:08:02.580 22080.591 - 22181.415: 99.7982% ( 4) 00:08:02.580 22181.415 - 22282.240: 99.8242% ( 4) 00:08:02.580 22282.240 - 22383.065: 99.8503% ( 4) 00:08:02.580 22383.065 - 22483.889: 99.8698% ( 3) 00:08:02.580 22483.889 - 22584.714: 99.8958% ( 4) 00:08:02.580 22584.714 - 22685.538: 99.9219% ( 4) 00:08:02.580 22685.538 - 22786.363: 99.9414% ( 3) 00:08:02.580 22786.363 - 22887.188: 99.9674% ( 4) 00:08:02.580 22887.188 - 22988.012: 99.9935% ( 4) 00:08:02.580 22988.012 - 23088.837: 100.0000% ( 1) 00:08:02.580 00:08:02.580 09:20:27 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:03.527 Initializing NVMe Controllers 00:08:03.527 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.527 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.527 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.527 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.527 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:03.527 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:03.527 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:03.527 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:03.527 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:03.527 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:03.527 Initialization complete. Launching workers. 
00:08:03.527 ======================================================== 00:08:03.527 Latency(us) 00:08:03.527 Device Information : IOPS MiB/s Average min max 00:08:03.527 PCIE (0000:00:13.0) NSID 1 from core 0: 8628.09 101.11 14856.78 5488.95 291643.85 00:08:03.527 PCIE (0000:00:10.0) NSID 1 from core 0: 8687.98 101.81 14730.69 6535.56 293767.91 00:08:03.527 PCIE (0000:00:11.0) NSID 1 from core 0: 8687.98 101.81 14706.13 6663.63 293665.84 00:08:03.527 PCIE (0000:00:12.0) NSID 1 from core 0: 8687.98 101.81 14682.97 5923.99 294019.50 00:08:03.527 PCIE (0000:00:12.0) NSID 2 from core 0: 8687.98 101.81 14659.82 5646.55 293890.11 00:08:03.527 PCIE (0000:00:12.0) NSID 3 from core 0: 8751.86 102.56 14529.64 5597.61 294074.08 00:08:03.527 ======================================================== 00:08:03.527 Total : 52131.87 610.92 14693.95 5488.95 294074.08 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 6074.683us 00:08:03.527 10.00000% : 8015.557us 00:08:03.527 25.00000% : 8318.031us 00:08:03.527 50.00000% : 8771.742us 00:08:03.527 75.00000% : 9830.400us 00:08:03.527 90.00000% : 11645.243us 00:08:03.527 95.00000% : 32263.877us 00:08:03.527 98.00000% : 72997.022us 00:08:03.527 99.00000% : 290374.892us 00:08:03.527 99.50000% : 291988.086us 00:08:03.527 99.90000% : 291988.086us 00:08:03.527 99.99000% : 291988.086us 00:08:03.527 99.99900% : 291988.086us 00:08:03.527 99.99990% : 291988.086us 00:08:03.527 99.99999% : 291988.086us 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 7007.311us 00:08:03.527 10.00000% : 7965.145us 00:08:03.527 25.00000% : 8318.031us 00:08:03.527 50.00000% : 8822.154us 00:08:03.527 75.00000% : 9880.812us 00:08:03.527 90.00000% : 11897.305us 00:08:03.527 95.00000% : 31457.280us 00:08:03.527 98.00000% : 73803.618us 00:08:03.527 99.00000% : 285535.311us 00:08:03.527 99.50000% : 293601.280us 00:08:03.527 99.90000% : 295214.474us 00:08:03.527 99.99000% : 295214.474us 00:08:03.527 99.99900% : 295214.474us 00:08:03.527 99.99990% : 295214.474us 00:08:03.527 99.99999% : 295214.474us 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 7108.135us 00:08:03.527 10.00000% : 8065.969us 00:08:03.527 25.00000% : 8318.031us 00:08:03.527 50.00000% : 8822.154us 00:08:03.527 75.00000% : 9880.812us 00:08:03.527 90.00000% : 12098.954us 00:08:03.527 95.00000% : 29642.437us 00:08:03.527 98.00000% : 71383.828us 00:08:03.527 99.00000% : 288761.698us 00:08:03.527 99.50000% : 290374.892us 00:08:03.527 99.90000% : 293601.280us 00:08:03.527 99.99000% : 295214.474us 00:08:03.527 99.99900% : 295214.474us 00:08:03.527 99.99990% : 295214.474us 00:08:03.527 99.99999% : 295214.474us 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 6377.157us 00:08:03.527 10.00000% : 8015.557us 00:08:03.527 25.00000% : 8267.618us 00:08:03.527 50.00000% : 8771.742us 00:08:03.527 75.00000% : 9779.988us 00:08:03.527 90.00000% : 11947.717us 00:08:03.527 95.00000% : 28634.191us 00:08:03.527 98.00000% : 
72190.425us 00:08:03.527 99.00000% : 290374.892us 00:08:03.527 99.50000% : 291988.086us 00:08:03.527 99.90000% : 295214.474us 00:08:03.527 99.99000% : 295214.474us 00:08:03.527 99.99900% : 295214.474us 00:08:03.527 99.99990% : 295214.474us 00:08:03.527 99.99999% : 295214.474us 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 6326.745us 00:08:03.527 10.00000% : 7965.145us 00:08:03.527 25.00000% : 8318.031us 00:08:03.527 50.00000% : 8771.742us 00:08:03.527 75.00000% : 9830.400us 00:08:03.527 90.00000% : 11594.831us 00:08:03.527 95.00000% : 27020.997us 00:08:03.527 98.00000% : 72593.723us 00:08:03.527 99.00000% : 290374.892us 00:08:03.527 99.50000% : 291988.086us 00:08:03.527 99.90000% : 291988.086us 00:08:03.527 99.99000% : 295214.474us 00:08:03.527 99.99900% : 295214.474us 00:08:03.527 99.99990% : 295214.474us 00:08:03.527 99.99999% : 295214.474us 00:08:03.527 00:08:03.527 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:03.527 ================================================================================= 00:08:03.527 1.00000% : 5999.065us 00:08:03.527 10.00000% : 7965.145us 00:08:03.527 25.00000% : 8267.618us 00:08:03.527 50.00000% : 8771.742us 00:08:03.527 75.00000% : 9830.400us 00:08:03.527 90.00000% : 11393.182us 00:08:03.527 95.00000% : 20870.695us 00:08:03.527 98.00000% : 72593.723us 00:08:03.527 99.00000% : 290374.892us 00:08:03.527 99.50000% : 291988.086us 00:08:03.527 99.90000% : 295214.474us 00:08:03.527 99.99000% : 295214.474us 00:08:03.527 99.99900% : 295214.474us 00:08:03.527 99.99990% : 295214.474us 00:08:03.527 99.99999% : 295214.474us 00:08:03.527 00:08:03.527 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:03.527 ============================================================================== 00:08:03.527 Range in us Cumulative IO count 00:08:03.527 5469.735 - 5494.942: 0.0116% ( 1) 00:08:03.527 5494.942 - 5520.148: 0.0578% ( 4) 00:08:03.527 5520.148 - 5545.354: 0.0810% ( 2) 00:08:03.527 5545.354 - 5570.560: 0.1504% ( 6) 00:08:03.527 5570.560 - 5595.766: 0.1967% ( 4) 00:08:03.527 5595.766 - 5620.972: 0.2314% ( 3) 00:08:03.527 5620.972 - 5646.178: 0.2892% ( 5) 00:08:03.527 5646.178 - 5671.385: 0.3008% ( 1) 00:08:03.527 5671.385 - 5696.591: 0.3471% ( 4) 00:08:03.527 5696.591 - 5721.797: 0.3818% ( 3) 00:08:03.527 5721.797 - 5747.003: 0.4396% ( 5) 00:08:03.527 5747.003 - 5772.209: 0.5090% ( 6) 00:08:03.527 5772.209 - 5797.415: 0.5437% ( 3) 00:08:03.527 5797.415 - 5822.622: 0.5669% ( 2) 00:08:03.527 5822.622 - 5847.828: 0.5900% ( 2) 00:08:03.528 5847.828 - 5873.034: 0.6478% ( 5) 00:08:03.528 5873.034 - 5898.240: 0.6941% ( 4) 00:08:03.528 5898.240 - 5923.446: 0.7520% ( 5) 00:08:03.528 5923.446 - 5948.652: 0.7867% ( 3) 00:08:03.528 5948.652 - 5973.858: 0.8214% ( 3) 00:08:03.528 5973.858 - 5999.065: 0.8908% ( 6) 00:08:03.528 5999.065 - 6024.271: 0.9371% ( 4) 00:08:03.528 6024.271 - 6049.477: 0.9833% ( 4) 00:08:03.528 6049.477 - 6074.683: 1.0528% ( 6) 00:08:03.528 6074.683 - 6099.889: 1.2726% ( 19) 00:08:03.528 6099.889 - 6125.095: 1.3188% ( 4) 00:08:03.528 6125.095 - 6150.302: 1.3420% ( 2) 00:08:03.528 6150.302 - 6175.508: 1.3651% ( 2) 00:08:03.528 6175.508 - 6200.714: 1.3882% ( 2) 00:08:03.528 6200.714 - 6225.920: 1.4114% ( 2) 00:08:03.528 6225.920 - 6251.126: 1.4230% ( 1) 00:08:03.528 6251.126 - 6276.332: 1.4461% ( 2) 00:08:03.528 6276.332 - 6301.538: 1.4692% ( 2) 00:08:03.528 
6301.538 - 6326.745: 1.4808% ( 1) 00:08:03.528 6326.745 - 6351.951: 1.5039% ( 2) 00:08:03.528 6351.951 - 6377.157: 1.5271% ( 2) 00:08:03.528 6856.074 - 6906.486: 1.5386% ( 1) 00:08:03.528 6906.486 - 6956.898: 1.5733% ( 3) 00:08:03.528 6956.898 - 7007.311: 1.6312% ( 5) 00:08:03.528 7007.311 - 7057.723: 1.7006% ( 6) 00:08:03.528 7057.723 - 7108.135: 1.8394% ( 12) 00:08:03.528 7108.135 - 7158.548: 1.9551% ( 10) 00:08:03.528 7158.548 - 7208.960: 2.1402% ( 16) 00:08:03.528 7208.960 - 7259.372: 2.4988% ( 31) 00:08:03.528 7259.372 - 7309.785: 2.6608% ( 14) 00:08:03.528 7309.785 - 7360.197: 2.9037% ( 21) 00:08:03.528 7360.197 - 7410.609: 3.1814% ( 24) 00:08:03.528 7410.609 - 7461.022: 3.4475% ( 23) 00:08:03.528 7461.022 - 7511.434: 3.6789% ( 20) 00:08:03.528 7511.434 - 7561.846: 3.9796% ( 26) 00:08:03.528 7561.846 - 7612.258: 4.4424% ( 40) 00:08:03.528 7612.258 - 7662.671: 4.8126% ( 32) 00:08:03.528 7662.671 - 7713.083: 5.0902% ( 24) 00:08:03.528 7713.083 - 7763.495: 5.5067% ( 36) 00:08:03.528 7763.495 - 7813.908: 6.1314% ( 54) 00:08:03.528 7813.908 - 7864.320: 7.0222% ( 77) 00:08:03.528 7864.320 - 7914.732: 8.2832% ( 109) 00:08:03.528 7914.732 - 7965.145: 9.3591% ( 93) 00:08:03.528 7965.145 - 8015.557: 11.1870% ( 158) 00:08:03.528 8015.557 - 8065.969: 13.2809% ( 181) 00:08:03.528 8065.969 - 8116.382: 15.2938% ( 174) 00:08:03.528 8116.382 - 8166.794: 17.9199% ( 227) 00:08:03.528 8166.794 - 8217.206: 21.7492% ( 331) 00:08:03.528 8217.206 - 8267.618: 24.5257% ( 240) 00:08:03.528 8267.618 - 8318.031: 28.0194% ( 302) 00:08:03.528 8318.031 - 8368.443: 30.7612% ( 237) 00:08:03.528 8368.443 - 8418.855: 33.7344% ( 257) 00:08:03.528 8418.855 - 8469.268: 35.9440% ( 191) 00:08:03.528 8469.268 - 8519.680: 38.7436% ( 242) 00:08:03.528 8519.680 - 8570.092: 41.6358% ( 250) 00:08:03.528 8570.092 - 8620.505: 43.8801% ( 194) 00:08:03.528 8620.505 - 8670.917: 46.2517% ( 205) 00:08:03.528 8670.917 - 8721.329: 48.6812% ( 210) 00:08:03.528 8721.329 - 8771.742: 51.1337% ( 212) 00:08:03.528 8771.742 - 8822.154: 53.3549% ( 192) 00:08:03.528 8822.154 - 8872.566: 55.9348% ( 223) 00:08:03.528 8872.566 - 8922.978: 57.6816% ( 151) 00:08:03.528 8922.978 - 8973.391: 59.1161% ( 124) 00:08:03.528 8973.391 - 9023.803: 60.6085% ( 129) 00:08:03.528 9023.803 - 9074.215: 62.0777% ( 127) 00:08:03.528 9074.215 - 9124.628: 63.5585% ( 128) 00:08:03.528 9124.628 - 9175.040: 64.8774% ( 114) 00:08:03.528 9175.040 - 9225.452: 66.0805% ( 104) 00:08:03.528 9225.452 - 9275.865: 67.0986% ( 88) 00:08:03.528 9275.865 - 9326.277: 67.8968% ( 69) 00:08:03.528 9326.277 - 9376.689: 68.6719% ( 67) 00:08:03.528 9376.689 - 9427.102: 69.5858% ( 79) 00:08:03.528 9427.102 - 9477.514: 70.2684% ( 59) 00:08:03.528 9477.514 - 9527.926: 71.2402% ( 84) 00:08:03.528 9527.926 - 9578.338: 71.9806% ( 64) 00:08:03.528 9578.338 - 9628.751: 72.6747% ( 60) 00:08:03.528 9628.751 - 9679.163: 73.3110% ( 55) 00:08:03.528 9679.163 - 9729.575: 73.9472% ( 55) 00:08:03.528 9729.575 - 9779.988: 74.5835% ( 55) 00:08:03.528 9779.988 - 9830.400: 75.2198% ( 55) 00:08:03.528 9830.400 - 9880.812: 75.7982% ( 50) 00:08:03.528 9880.812 - 9931.225: 76.5155% ( 62) 00:08:03.528 9931.225 - 9981.637: 77.1055% ( 51) 00:08:03.528 9981.637 - 10032.049: 77.8228% ( 62) 00:08:03.528 10032.049 - 10082.462: 78.4128% ( 51) 00:08:03.528 10082.462 - 10132.874: 78.9102% ( 43) 00:08:03.528 10132.874 - 10183.286: 79.3151% ( 35) 00:08:03.528 10183.286 - 10233.698: 79.7432% ( 37) 00:08:03.528 10233.698 - 10284.111: 80.1481% ( 35) 00:08:03.528 10284.111 - 10334.523: 80.5414% ( 34) 00:08:03.528 10334.523 - 
10384.935: 80.8191% ( 24) 00:08:03.528 10384.935 - 10435.348: 81.1199% ( 26) 00:08:03.528 10435.348 - 10485.760: 81.4091% ( 25) 00:08:03.528 10485.760 - 10536.172: 81.7330% ( 28) 00:08:03.528 10536.172 - 10586.585: 82.1495% ( 36) 00:08:03.528 10586.585 - 10636.997: 82.5659% ( 36) 00:08:03.528 10636.997 - 10687.409: 83.0171% ( 39) 00:08:03.528 10687.409 - 10737.822: 83.5261% ( 44) 00:08:03.528 10737.822 - 10788.234: 84.0352% ( 44) 00:08:03.528 10788.234 - 10838.646: 84.4979% ( 40) 00:08:03.528 10838.646 - 10889.058: 84.8450% ( 30) 00:08:03.528 10889.058 - 10939.471: 85.1573% ( 27) 00:08:03.528 10939.471 - 10989.883: 85.5275% ( 32) 00:08:03.528 10989.883 - 11040.295: 85.9556% ( 37) 00:08:03.528 11040.295 - 11090.708: 86.3836% ( 37) 00:08:03.528 11090.708 - 11141.120: 86.7770% ( 34) 00:08:03.528 11141.120 - 11191.532: 87.1587% ( 33) 00:08:03.528 11191.532 - 11241.945: 87.4942% ( 29) 00:08:03.528 11241.945 - 11292.357: 87.9454% ( 39) 00:08:03.528 11292.357 - 11342.769: 88.3272% ( 33) 00:08:03.528 11342.769 - 11393.182: 88.7668% ( 38) 00:08:03.528 11393.182 - 11443.594: 89.0213% ( 22) 00:08:03.528 11443.594 - 11494.006: 89.2758% ( 22) 00:08:03.528 11494.006 - 11544.418: 89.5534% ( 24) 00:08:03.528 11544.418 - 11594.831: 89.8427% ( 25) 00:08:03.528 11594.831 - 11645.243: 90.0972% ( 22) 00:08:03.528 11645.243 - 11695.655: 90.3054% ( 18) 00:08:03.528 11695.655 - 11746.068: 90.4327% ( 11) 00:08:03.528 11746.068 - 11796.480: 90.6293% ( 17) 00:08:03.528 11796.480 - 11846.892: 90.8260% ( 17) 00:08:03.528 11846.892 - 11897.305: 90.9070% ( 7) 00:08:03.528 11897.305 - 11947.717: 90.9648% ( 5) 00:08:03.528 11947.717 - 11998.129: 90.9995% ( 3) 00:08:03.528 11998.129 - 12048.542: 91.0458% ( 4) 00:08:03.528 12048.542 - 12098.954: 91.0921% ( 4) 00:08:03.528 12098.954 - 12149.366: 91.1384% ( 4) 00:08:03.528 12149.366 - 12199.778: 91.1731% ( 3) 00:08:03.528 12199.778 - 12250.191: 91.1962% ( 2) 00:08:03.528 12250.191 - 12300.603: 91.2309% ( 3) 00:08:03.528 12300.603 - 12351.015: 91.3582% ( 11) 00:08:03.528 12351.015 - 12401.428: 91.5086% ( 13) 00:08:03.528 12401.428 - 12451.840: 91.5317% ( 2) 00:08:03.528 12451.840 - 12502.252: 91.5548% ( 2) 00:08:03.528 12502.252 - 12552.665: 91.5664% ( 1) 00:08:03.528 12552.665 - 12603.077: 91.6242% ( 5) 00:08:03.528 12603.077 - 12653.489: 91.6358% ( 1) 00:08:03.528 12653.489 - 12703.902: 91.6590% ( 2) 00:08:03.528 12703.902 - 12754.314: 91.6705% ( 1) 00:08:03.528 12754.314 - 12804.726: 91.6821% ( 1) 00:08:03.528 12804.726 - 12855.138: 91.7052% ( 2) 00:08:03.528 12855.138 - 12905.551: 91.7631% ( 5) 00:08:03.528 12905.551 - 13006.375: 92.0523% ( 25) 00:08:03.528 13006.375 - 13107.200: 92.2258% ( 15) 00:08:03.528 13107.200 - 13208.025: 92.3994% ( 15) 00:08:03.528 13208.025 - 13308.849: 92.5613% ( 14) 00:08:03.528 13308.849 - 13409.674: 92.7117% ( 13) 00:08:03.528 13409.674 - 13510.498: 92.8621% ( 13) 00:08:03.528 13510.498 - 13611.323: 93.0356% ( 15) 00:08:03.528 13611.323 - 13712.148: 93.2207% ( 16) 00:08:03.528 13712.148 - 13812.972: 93.3943% ( 15) 00:08:03.528 13812.972 - 13913.797: 93.5099% ( 10) 00:08:03.528 13913.797 - 14014.622: 93.5794% ( 6) 00:08:03.528 14014.622 - 14115.446: 93.6372% ( 5) 00:08:03.528 14115.446 - 14216.271: 93.6835% ( 4) 00:08:03.528 14216.271 - 14317.095: 93.7413% ( 5) 00:08:03.528 14317.095 - 14417.920: 93.8570% ( 10) 00:08:03.528 14417.920 - 14518.745: 93.9958% ( 12) 00:08:03.528 14518.745 - 14619.569: 94.0537% ( 5) 00:08:03.528 14619.569 - 14720.394: 94.0768% ( 2) 00:08:03.528 27222.646 - 27424.295: 94.0884% ( 1) 00:08:03.528 27424.295 - 27625.945: 
94.1231% ( 3) 00:08:03.528 27625.945 - 27827.594: 94.2041% ( 7) 00:08:03.528 27827.594 - 28029.243: 94.3545% ( 13) 00:08:03.528 28029.243 - 28230.892: 94.4586% ( 9) 00:08:03.528 28230.892 - 28432.542: 94.5511% ( 8) 00:08:03.528 28432.542 - 28634.191: 94.6437% ( 8) 00:08:03.528 28634.191 - 28835.840: 94.7362% ( 8) 00:08:03.528 28835.840 - 29037.489: 94.8172% ( 7) 00:08:03.528 31860.578 - 32062.228: 94.8404% ( 2) 00:08:03.528 32062.228 - 32263.877: 95.0255% ( 16) 00:08:03.528 32868.825 - 33070.474: 95.0370% ( 1) 00:08:03.528 33070.474 - 33272.123: 95.6617% ( 54) 00:08:03.528 33272.123 - 33473.772: 95.7658% ( 9) 00:08:03.528 33473.772 - 33675.422: 95.8584% ( 8) 00:08:03.528 33675.422 - 33877.071: 95.9509% ( 8) 00:08:03.528 33877.071 - 34078.720: 96.0551% ( 9) 00:08:03.528 34078.720 - 34280.369: 96.1013% ( 4) 00:08:03.528 34280.369 - 34482.018: 96.1476% ( 4) 00:08:03.528 35691.914 - 35893.563: 96.1708% ( 2) 00:08:03.528 35893.563 - 36095.212: 96.2980% ( 11) 00:08:03.528 36095.212 - 36296.862: 96.3443% ( 4) 00:08:03.528 36498.511 - 36700.160: 96.5062% ( 14) 00:08:03.528 36700.160 - 36901.809: 96.5410% ( 3) 00:08:03.528 39523.249 - 39724.898: 96.7029% ( 14) 00:08:03.528 39724.898 - 39926.548: 96.8417% ( 12) 00:08:03.529 39926.548 - 40128.197: 96.9574% ( 10) 00:08:03.529 40128.197 - 40329.846: 97.0384% ( 7) 00:08:03.529 68560.738 - 68964.037: 97.3045% ( 23) 00:08:03.529 70577.231 - 70980.529: 97.4317% ( 11) 00:08:03.529 70980.529 - 71383.828: 97.7788% ( 30) 00:08:03.529 72190.425 - 72593.723: 97.7904% ( 1) 00:08:03.529 72593.723 - 72997.022: 98.2531% ( 40) 00:08:03.529 75416.812 - 75820.111: 98.4729% ( 19) 00:08:03.529 75820.111 - 76223.409: 98.5192% ( 4) 00:08:03.529 288761.698 - 290374.892: 99.2596% ( 64) 00:08:03.529 290374.892 - 291988.086: 100.0000% ( 64) 00:08:03.529 00:08:03.529 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:03.529 ============================================================================== 00:08:03.529 Range in us Cumulative IO count 00:08:03.529 6503.188 - 6553.600: 0.0230% ( 2) 00:08:03.529 6553.600 - 6604.012: 0.0919% ( 6) 00:08:03.529 6604.012 - 6654.425: 0.1264% ( 3) 00:08:03.529 6654.425 - 6704.837: 0.1723% ( 4) 00:08:03.529 6704.837 - 6755.249: 0.2298% ( 5) 00:08:03.529 6755.249 - 6805.662: 0.3217% ( 8) 00:08:03.529 6805.662 - 6856.074: 0.4596% ( 12) 00:08:03.529 6856.074 - 6906.486: 0.6778% ( 19) 00:08:03.529 6906.486 - 6956.898: 0.8847% ( 18) 00:08:03.529 6956.898 - 7007.311: 1.0225% ( 12) 00:08:03.529 7007.311 - 7057.723: 1.1259% ( 9) 00:08:03.529 7057.723 - 7108.135: 1.1834% ( 5) 00:08:03.529 7108.135 - 7158.548: 1.2063% ( 2) 00:08:03.529 7158.548 - 7208.960: 1.2983% ( 8) 00:08:03.529 7208.960 - 7259.372: 1.4821% ( 16) 00:08:03.529 7259.372 - 7309.785: 1.7348% ( 22) 00:08:03.529 7309.785 - 7360.197: 1.9531% ( 19) 00:08:03.529 7360.197 - 7410.609: 2.1599% ( 18) 00:08:03.529 7410.609 - 7461.022: 2.3782% ( 19) 00:08:03.529 7461.022 - 7511.434: 2.5850% ( 18) 00:08:03.529 7511.434 - 7561.846: 2.9642% ( 33) 00:08:03.529 7561.846 - 7612.258: 3.3088% ( 30) 00:08:03.529 7612.258 - 7662.671: 3.7109% ( 35) 00:08:03.529 7662.671 - 7713.083: 4.4462% ( 64) 00:08:03.529 7713.083 - 7763.495: 5.2390% ( 69) 00:08:03.529 7763.495 - 7813.908: 6.3764% ( 99) 00:08:03.529 7813.908 - 7864.320: 7.7206% ( 117) 00:08:03.529 7864.320 - 7914.732: 9.4669% ( 152) 00:08:03.529 7914.732 - 7965.145: 11.0064% ( 134) 00:08:03.529 7965.145 - 8015.557: 13.0859% ( 181) 00:08:03.529 8015.557 - 8065.969: 15.5101% ( 211) 00:08:03.529 8065.969 - 8116.382: 17.7390% ( 194) 
00:08:03.529 8116.382 - 8166.794: 20.1057% ( 206) 00:08:03.529 8166.794 - 8217.206: 22.3575% ( 196) 00:08:03.529 8217.206 - 8267.618: 24.4945% ( 186) 00:08:03.529 8267.618 - 8318.031: 26.4591% ( 171) 00:08:03.529 8318.031 - 8368.443: 29.2739% ( 245) 00:08:03.529 8368.443 - 8418.855: 31.5257% ( 196) 00:08:03.529 8418.855 - 8469.268: 33.7891% ( 197) 00:08:03.529 8469.268 - 8519.680: 36.1443% ( 205) 00:08:03.529 8519.680 - 8570.092: 38.7868% ( 230) 00:08:03.529 8570.092 - 8620.505: 41.7854% ( 261) 00:08:03.529 8620.505 - 8670.917: 44.0028% ( 193) 00:08:03.529 8670.917 - 8721.329: 46.4384% ( 212) 00:08:03.529 8721.329 - 8771.742: 48.2767% ( 160) 00:08:03.529 8771.742 - 8822.154: 50.1379% ( 162) 00:08:03.529 8822.154 - 8872.566: 52.1944% ( 179) 00:08:03.529 8872.566 - 8922.978: 54.4462% ( 196) 00:08:03.529 8922.978 - 8973.391: 56.3534% ( 166) 00:08:03.529 8973.391 - 9023.803: 57.9504% ( 139) 00:08:03.529 9023.803 - 9074.215: 59.3750% ( 124) 00:08:03.529 9074.215 - 9124.628: 60.7652% ( 121) 00:08:03.529 9124.628 - 9175.040: 61.9945% ( 107) 00:08:03.529 9175.040 - 9225.452: 63.2468% ( 109) 00:08:03.529 9225.452 - 9275.865: 64.3842% ( 99) 00:08:03.529 9275.865 - 9326.277: 65.3493% ( 84) 00:08:03.529 9326.277 - 9376.689: 66.3028% ( 83) 00:08:03.529 9376.689 - 9427.102: 67.3024% ( 87) 00:08:03.529 9427.102 - 9477.514: 68.3134% ( 88) 00:08:03.529 9477.514 - 9527.926: 69.3934% ( 94) 00:08:03.529 9527.926 - 9578.338: 70.2321% ( 73) 00:08:03.529 9578.338 - 9628.751: 71.0708% ( 73) 00:08:03.529 9628.751 - 9679.163: 72.1507% ( 94) 00:08:03.529 9679.163 - 9729.575: 73.1043% ( 83) 00:08:03.529 9729.575 - 9779.988: 73.9775% ( 76) 00:08:03.529 9779.988 - 9830.400: 74.6094% ( 55) 00:08:03.529 9830.400 - 9880.812: 75.2068% ( 52) 00:08:03.529 9880.812 - 9931.225: 75.7238% ( 45) 00:08:03.529 9931.225 - 9981.637: 76.1604% ( 38) 00:08:03.529 9981.637 - 10032.049: 76.7463% ( 51) 00:08:03.529 10032.049 - 10082.462: 77.3667% ( 54) 00:08:03.529 10082.462 - 10132.874: 77.9412% ( 50) 00:08:03.529 10132.874 - 10183.286: 78.4007% ( 40) 00:08:03.529 10183.286 - 10233.698: 78.9522% ( 48) 00:08:03.529 10233.698 - 10284.111: 79.5152% ( 49) 00:08:03.529 10284.111 - 10334.523: 80.0551% ( 47) 00:08:03.529 10334.523 - 10384.935: 80.6526% ( 52) 00:08:03.529 10384.935 - 10435.348: 81.2500% ( 52) 00:08:03.529 10435.348 - 10485.760: 81.8474% ( 52) 00:08:03.529 10485.760 - 10536.172: 82.3989% ( 48) 00:08:03.529 10536.172 - 10586.585: 82.8355% ( 38) 00:08:03.529 10586.585 - 10636.997: 83.3180% ( 42) 00:08:03.529 10636.997 - 10687.409: 83.8006% ( 42) 00:08:03.529 10687.409 - 10737.822: 84.1222% ( 28) 00:08:03.529 10737.822 - 10788.234: 84.5244% ( 35) 00:08:03.529 10788.234 - 10838.646: 84.8346% ( 27) 00:08:03.529 10838.646 - 10889.058: 85.0988% ( 23) 00:08:03.529 10889.058 - 10939.471: 85.3516% ( 22) 00:08:03.529 10939.471 - 10989.883: 85.7422% ( 34) 00:08:03.529 10989.883 - 11040.295: 86.0639% ( 28) 00:08:03.529 11040.295 - 11090.708: 86.4890% ( 37) 00:08:03.529 11090.708 - 11141.120: 86.8107% ( 28) 00:08:03.529 11141.120 - 11191.532: 87.1783% ( 32) 00:08:03.529 11191.532 - 11241.945: 87.4081% ( 20) 00:08:03.529 11241.945 - 11292.357: 87.6264% ( 19) 00:08:03.529 11292.357 - 11342.769: 87.7872% ( 14) 00:08:03.529 11342.769 - 11393.182: 88.0400% ( 22) 00:08:03.529 11393.182 - 11443.594: 88.2927% ( 22) 00:08:03.529 11443.594 - 11494.006: 88.4881% ( 17) 00:08:03.529 11494.006 - 11544.418: 88.6604% ( 15) 00:08:03.529 11544.418 - 11594.831: 88.8902% ( 20) 00:08:03.529 11594.831 - 11645.243: 89.0740% ( 16) 00:08:03.529 11645.243 - 11695.655: 
89.2693% ( 17) 00:08:03.529 11695.655 - 11746.068: 89.4531% ( 16) 00:08:03.529 11746.068 - 11796.480: 89.6140% ( 14) 00:08:03.529 11796.480 - 11846.892: 89.8093% ( 17) 00:08:03.529 11846.892 - 11897.305: 90.0161% ( 18) 00:08:03.529 11897.305 - 11947.717: 90.3033% ( 25) 00:08:03.529 11947.717 - 11998.129: 90.5101% ( 18) 00:08:03.529 11998.129 - 12048.542: 90.6250% ( 10) 00:08:03.529 12048.542 - 12098.954: 90.8318% ( 18) 00:08:03.529 12098.954 - 12149.366: 91.0156% ( 16) 00:08:03.529 12149.366 - 12199.778: 91.0616% ( 4) 00:08:03.529 12199.778 - 12250.191: 91.1305% ( 6) 00:08:03.529 12250.191 - 12300.603: 91.2224% ( 8) 00:08:03.529 12300.603 - 12351.015: 91.3258% ( 9) 00:08:03.529 12351.015 - 12401.428: 91.4177% ( 8) 00:08:03.529 12401.428 - 12451.840: 91.4867% ( 6) 00:08:03.529 12451.840 - 12502.252: 91.6016% ( 10) 00:08:03.529 12502.252 - 12552.665: 91.6820% ( 7) 00:08:03.529 12552.665 - 12603.077: 91.8084% ( 11) 00:08:03.529 12603.077 - 12653.489: 91.9118% ( 9) 00:08:03.529 12653.489 - 12703.902: 92.0381% ( 11) 00:08:03.529 12703.902 - 12754.314: 92.0956% ( 5) 00:08:03.529 12754.314 - 12804.726: 92.1645% ( 6) 00:08:03.529 12804.726 - 12855.138: 92.2220% ( 5) 00:08:03.529 12855.138 - 12905.551: 92.2679% ( 4) 00:08:03.529 12905.551 - 13006.375: 92.3598% ( 8) 00:08:03.529 13006.375 - 13107.200: 92.4862% ( 11) 00:08:03.529 13107.200 - 13208.025: 92.6585% ( 15) 00:08:03.529 13208.025 - 13308.849: 92.7734% ( 10) 00:08:03.529 13308.849 - 13409.674: 92.9343% ( 14) 00:08:03.529 13409.674 - 13510.498: 93.0492% ( 10) 00:08:03.529 13510.498 - 13611.323: 93.1641% ( 10) 00:08:03.529 13611.323 - 13712.148: 93.3134% ( 13) 00:08:03.529 13712.148 - 13812.972: 93.3938% ( 7) 00:08:03.529 13812.972 - 13913.797: 93.5317% ( 12) 00:08:03.529 13913.797 - 14014.622: 93.6006% ( 6) 00:08:03.529 14014.622 - 14115.446: 93.7270% ( 11) 00:08:03.529 14115.446 - 14216.271: 93.8189% ( 8) 00:08:03.529 14216.271 - 14317.095: 93.9798% ( 14) 00:08:03.529 14417.920 - 14518.745: 93.9913% ( 1) 00:08:03.529 14518.745 - 14619.569: 94.0372% ( 4) 00:08:03.529 14619.569 - 14720.394: 94.0832% ( 4) 00:08:03.529 14720.394 - 14821.218: 94.1176% ( 3) 00:08:03.529 25407.803 - 25508.628: 94.1521% ( 3) 00:08:03.529 25508.628 - 25609.452: 94.1636% ( 1) 00:08:03.529 25609.452 - 25710.277: 94.2210% ( 5) 00:08:03.529 25710.277 - 25811.102: 94.2670% ( 4) 00:08:03.529 25811.102 - 26012.751: 94.3474% ( 7) 00:08:03.529 26012.751 - 26214.400: 94.4278% ( 7) 00:08:03.529 26214.400 - 26416.049: 94.5198% ( 8) 00:08:03.529 26416.049 - 26617.698: 94.6117% ( 8) 00:08:03.529 26617.698 - 26819.348: 94.6921% ( 7) 00:08:03.529 26819.348 - 27020.997: 94.7725% ( 7) 00:08:03.529 27020.997 - 27222.646: 94.8529% ( 7) 00:08:03.529 29239.138 - 29440.788: 94.8989% ( 4) 00:08:03.529 30852.332 - 31053.982: 94.9104% ( 1) 00:08:03.529 31053.982 - 31255.631: 94.9678% ( 5) 00:08:03.529 31255.631 - 31457.280: 95.0712% ( 9) 00:08:03.529 31457.280 - 31658.929: 95.2665% ( 17) 00:08:03.529 31658.929 - 31860.578: 95.8180% ( 48) 00:08:03.529 31860.578 - 32062.228: 95.9444% ( 11) 00:08:03.529 32062.228 - 32263.877: 96.0823% ( 12) 00:08:03.529 32263.877 - 32465.526: 96.1627% ( 7) 00:08:03.529 32465.526 - 32667.175: 96.2431% ( 7) 00:08:03.529 32667.175 - 32868.825: 96.3235% ( 7) 00:08:03.529 32868.825 - 33070.474: 96.3350% ( 1) 00:08:03.529 33070.474 - 33272.123: 96.6222% ( 25) 00:08:03.530 33272.123 - 33473.772: 96.9784% ( 31) 00:08:03.530 33473.772 - 33675.422: 97.0129% ( 3) 00:08:03.530 35893.563 - 36095.212: 97.0588% ( 4) 00:08:03.530 64527.754 - 64931.052: 97.2197% ( 14) 
00:08:03.530 64931.052 - 65334.351: 97.7941% ( 50) 00:08:03.530 72997.022 - 73400.320: 97.8056% ( 1) 00:08:03.530 73400.320 - 73803.618: 98.1043% ( 26) 00:08:03.530 73803.618 - 74206.917: 98.3915% ( 25) 00:08:03.530 74206.917 - 74610.215: 98.5294% ( 12) 00:08:03.530 283922.117 - 285535.311: 99.2647% ( 64) 00:08:03.530 291988.086 - 293601.280: 99.8851% ( 54) 00:08:03.530 293601.280 - 295214.474: 100.0000% ( 10) 00:08:03.530 00:08:03.530 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:03.530 ============================================================================== 00:08:03.530 Range in us Cumulative IO count 00:08:03.530 6654.425 - 6704.837: 0.0574% ( 5) 00:08:03.530 6704.837 - 6755.249: 0.1264% ( 6) 00:08:03.530 6755.249 - 6805.662: 0.1953% ( 6) 00:08:03.530 6805.662 - 6856.074: 0.2757% ( 7) 00:08:03.530 6856.074 - 6906.486: 0.3447% ( 6) 00:08:03.530 6906.486 - 6956.898: 0.5859% ( 21) 00:08:03.530 6956.898 - 7007.311: 0.7468% ( 14) 00:08:03.530 7007.311 - 7057.723: 0.9881% ( 21) 00:08:03.530 7057.723 - 7108.135: 1.2753% ( 25) 00:08:03.530 7108.135 - 7158.548: 1.5395% ( 23) 00:08:03.530 7158.548 - 7208.960: 1.7693% ( 20) 00:08:03.530 7208.960 - 7259.372: 1.9761% ( 18) 00:08:03.530 7259.372 - 7309.785: 2.1599% ( 16) 00:08:03.530 7309.785 - 7360.197: 2.2978% ( 12) 00:08:03.530 7360.197 - 7410.609: 2.4472% ( 13) 00:08:03.530 7410.609 - 7461.022: 2.5506% ( 9) 00:08:03.530 7461.022 - 7511.434: 2.7459% ( 17) 00:08:03.530 7511.434 - 7561.846: 2.9412% ( 17) 00:08:03.530 7561.846 - 7612.258: 3.1595% ( 19) 00:08:03.530 7612.258 - 7662.671: 3.5156% ( 31) 00:08:03.530 7662.671 - 7713.083: 3.8948% ( 33) 00:08:03.530 7713.083 - 7763.495: 4.5726% ( 59) 00:08:03.530 7763.495 - 7813.908: 5.1700% ( 52) 00:08:03.530 7813.908 - 7864.320: 5.9972% ( 72) 00:08:03.530 7864.320 - 7914.732: 6.9508% ( 83) 00:08:03.530 7914.732 - 7965.145: 8.2491% ( 113) 00:08:03.530 7965.145 - 8015.557: 9.8920% ( 143) 00:08:03.530 8015.557 - 8065.969: 11.8222% ( 168) 00:08:03.530 8065.969 - 8116.382: 13.9936% ( 189) 00:08:03.530 8116.382 - 8166.794: 16.7509% ( 240) 00:08:03.530 8166.794 - 8217.206: 19.4049% ( 231) 00:08:03.530 8217.206 - 8267.618: 22.4494% ( 265) 00:08:03.530 8267.618 - 8318.031: 25.4596% ( 262) 00:08:03.530 8318.031 - 8368.443: 28.5156% ( 266) 00:08:03.530 8368.443 - 8418.855: 31.8934% ( 294) 00:08:03.530 8418.855 - 8469.268: 34.9724% ( 268) 00:08:03.530 8469.268 - 8519.680: 37.7987% ( 246) 00:08:03.530 8519.680 - 8570.092: 40.3608% ( 223) 00:08:03.530 8570.092 - 8620.505: 42.6930% ( 203) 00:08:03.530 8620.505 - 8670.917: 45.1631% ( 215) 00:08:03.530 8670.917 - 8721.329: 47.2886% ( 185) 00:08:03.530 8721.329 - 8771.742: 49.5060% ( 193) 00:08:03.530 8771.742 - 8822.154: 51.7463% ( 195) 00:08:03.530 8822.154 - 8872.566: 53.8603% ( 184) 00:08:03.530 8872.566 - 8922.978: 55.6756% ( 158) 00:08:03.530 8922.978 - 8973.391: 57.3415% ( 145) 00:08:03.530 8973.391 - 9023.803: 58.8350% ( 130) 00:08:03.530 9023.803 - 9074.215: 60.1333% ( 113) 00:08:03.530 9074.215 - 9124.628: 61.6958% ( 136) 00:08:03.530 9124.628 - 9175.040: 62.8332% ( 99) 00:08:03.530 9175.040 - 9225.452: 63.9591% ( 98) 00:08:03.530 9225.452 - 9275.865: 65.1310% ( 102) 00:08:03.530 9275.865 - 9326.277: 66.3143% ( 103) 00:08:03.530 9326.277 - 9376.689: 67.4517% ( 99) 00:08:03.530 9376.689 - 9427.102: 68.3019% ( 74) 00:08:03.530 9427.102 - 9477.514: 69.2440% ( 82) 00:08:03.530 9477.514 - 9527.926: 70.3585% ( 97) 00:08:03.530 9527.926 - 9578.338: 71.2546% ( 78) 00:08:03.530 9578.338 - 9628.751: 72.0358% ( 68) 00:08:03.530 9628.751 - 
9679.163: 72.8056% ( 67) 00:08:03.530 9679.163 - 9729.575: 73.4835% ( 59) 00:08:03.530 9729.575 - 9779.988: 74.1383% ( 57) 00:08:03.530 9779.988 - 9830.400: 74.8047% ( 58) 00:08:03.530 9830.400 - 9880.812: 75.5630% ( 66) 00:08:03.530 9880.812 - 9931.225: 76.2063% ( 56) 00:08:03.530 9931.225 - 9981.637: 76.8727% ( 58) 00:08:03.530 9981.637 - 10032.049: 77.4012% ( 46) 00:08:03.530 10032.049 - 10082.462: 78.1710% ( 67) 00:08:03.530 10082.462 - 10132.874: 78.7914% ( 54) 00:08:03.530 10132.874 - 10183.286: 79.2739% ( 42) 00:08:03.530 10183.286 - 10233.698: 79.7335% ( 40) 00:08:03.530 10233.698 - 10284.111: 80.2045% ( 41) 00:08:03.530 10284.111 - 10334.523: 80.7215% ( 45) 00:08:03.530 10334.523 - 10384.935: 81.2385% ( 45) 00:08:03.530 10384.935 - 10435.348: 81.6866% ( 39) 00:08:03.530 10435.348 - 10485.760: 82.2151% ( 46) 00:08:03.530 10485.760 - 10536.172: 82.6172% ( 35) 00:08:03.530 10536.172 - 10586.585: 83.0653% ( 39) 00:08:03.530 10586.585 - 10636.997: 83.5823% ( 45) 00:08:03.530 10636.997 - 10687.409: 84.0074% ( 37) 00:08:03.530 10687.409 - 10737.822: 84.6737% ( 58) 00:08:03.530 10737.822 - 10788.234: 84.9954% ( 28) 00:08:03.530 10788.234 - 10838.646: 85.3975% ( 35) 00:08:03.530 10838.646 - 10889.058: 85.7996% ( 35) 00:08:03.530 10889.058 - 10939.471: 86.1098% ( 27) 00:08:03.530 10939.471 - 10989.883: 86.4775% ( 32) 00:08:03.530 10989.883 - 11040.295: 86.7532% ( 24) 00:08:03.530 11040.295 - 11090.708: 86.9830% ( 20) 00:08:03.530 11090.708 - 11141.120: 87.1898% ( 18) 00:08:03.530 11141.120 - 11191.532: 87.4081% ( 19) 00:08:03.530 11191.532 - 11241.945: 87.5574% ( 13) 00:08:03.530 11241.945 - 11292.357: 87.6723% ( 10) 00:08:03.530 11292.357 - 11342.769: 87.7298% ( 5) 00:08:03.530 11342.769 - 11393.182: 87.8217% ( 8) 00:08:03.530 11393.182 - 11443.594: 87.9366% ( 10) 00:08:03.530 11443.594 - 11494.006: 88.0285% ( 8) 00:08:03.530 11494.006 - 11544.418: 88.1778% ( 13) 00:08:03.530 11544.418 - 11594.831: 88.3617% ( 16) 00:08:03.530 11594.831 - 11645.243: 88.5340% ( 15) 00:08:03.530 11645.243 - 11695.655: 88.7293% ( 17) 00:08:03.530 11695.655 - 11746.068: 88.8787% ( 13) 00:08:03.530 11746.068 - 11796.480: 89.1085% ( 20) 00:08:03.530 11796.480 - 11846.892: 89.2119% ( 9) 00:08:03.530 11846.892 - 11897.305: 89.3267% ( 10) 00:08:03.530 11897.305 - 11947.717: 89.4531% ( 11) 00:08:03.530 11947.717 - 11998.129: 89.6140% ( 14) 00:08:03.530 11998.129 - 12048.542: 89.7978% ( 16) 00:08:03.530 12048.542 - 12098.954: 90.0046% ( 18) 00:08:03.530 12098.954 - 12149.366: 90.1769% ( 15) 00:08:03.530 12149.366 - 12199.778: 90.4412% ( 23) 00:08:03.530 12199.778 - 12250.191: 90.6020% ( 14) 00:08:03.530 12250.191 - 12300.603: 90.7514% ( 13) 00:08:03.530 12300.603 - 12351.015: 90.9007% ( 13) 00:08:03.530 12351.015 - 12401.428: 91.0846% ( 16) 00:08:03.530 12401.428 - 12451.840: 91.2569% ( 15) 00:08:03.530 12451.840 - 12502.252: 91.3718% ( 10) 00:08:03.530 12502.252 - 12552.665: 91.4752% ( 9) 00:08:03.530 12552.665 - 12603.077: 91.6016% ( 11) 00:08:03.530 12603.077 - 12653.489: 91.7165% ( 10) 00:08:03.530 12653.489 - 12703.902: 91.8543% ( 12) 00:08:03.530 12703.902 - 12754.314: 92.0037% ( 13) 00:08:03.530 12754.314 - 12804.726: 92.1875% ( 16) 00:08:03.530 12804.726 - 12855.138: 92.3369% ( 13) 00:08:03.530 12855.138 - 12905.551: 92.4977% ( 14) 00:08:03.530 12905.551 - 13006.375: 92.7275% ( 20) 00:08:03.530 13006.375 - 13107.200: 92.8424% ( 10) 00:08:03.530 13107.200 - 13208.025: 92.8998% ( 5) 00:08:03.530 13208.025 - 13308.849: 93.0032% ( 9) 00:08:03.530 13308.849 - 13409.674: 93.3249% ( 28) 00:08:03.530 13409.674 - 
13510.498: 93.6121% ( 25) 00:08:03.530 13510.498 - 13611.323: 93.7155% ( 9) 00:08:03.530 13611.323 - 13712.148: 93.8419% ( 11) 00:08:03.530 13712.148 - 13812.972: 93.9453% ( 9) 00:08:03.530 13812.972 - 13913.797: 94.0602% ( 10) 00:08:03.530 13913.797 - 14014.622: 94.1062% ( 4) 00:08:03.530 14014.622 - 14115.446: 94.1176% ( 1) 00:08:03.530 23592.960 - 23693.785: 94.1291% ( 1) 00:08:03.530 23693.785 - 23794.609: 94.1751% ( 4) 00:08:03.530 23794.609 - 23895.434: 94.2210% ( 4) 00:08:03.530 23895.434 - 23996.258: 94.2670% ( 4) 00:08:03.530 23996.258 - 24097.083: 94.3130% ( 4) 00:08:03.530 24097.083 - 24197.908: 94.3589% ( 4) 00:08:03.530 24197.908 - 24298.732: 94.4049% ( 4) 00:08:03.530 24298.732 - 24399.557: 94.4508% ( 4) 00:08:03.530 24399.557 - 24500.382: 94.4968% ( 4) 00:08:03.530 24500.382 - 24601.206: 94.5427% ( 4) 00:08:03.530 24601.206 - 24702.031: 94.5887% ( 4) 00:08:03.530 24702.031 - 24802.855: 94.6347% ( 4) 00:08:03.530 24802.855 - 24903.680: 94.6921% ( 5) 00:08:03.530 24903.680 - 25004.505: 94.7381% ( 4) 00:08:03.530 25004.505 - 25105.329: 94.7840% ( 4) 00:08:03.530 25105.329 - 25206.154: 94.8300% ( 4) 00:08:03.530 25206.154 - 25306.978: 94.8529% ( 2) 00:08:03.530 29239.138 - 29440.788: 94.9449% ( 8) 00:08:03.530 29440.788 - 29642.437: 95.0253% ( 7) 00:08:03.530 29642.437 - 29844.086: 95.1172% ( 8) 00:08:03.530 29844.086 - 30045.735: 95.2091% ( 8) 00:08:03.530 30045.735 - 30247.385: 95.3010% ( 8) 00:08:03.530 30247.385 - 30449.034: 95.4044% ( 9) 00:08:03.530 30449.034 - 30650.683: 95.4963% ( 8) 00:08:03.530 30650.683 - 30852.332: 95.5882% ( 8) 00:08:03.530 31658.929 - 31860.578: 96.2661% ( 59) 00:08:03.530 32062.228 - 32263.877: 96.3235% ( 5) 00:08:03.530 33070.474 - 33272.123: 96.3350% ( 1) 00:08:03.530 33272.123 - 33473.772: 97.0588% ( 63) 00:08:03.530 65737.649 - 66140.948: 97.3690% ( 27) 00:08:03.530 67754.142 - 68157.440: 97.7941% ( 37) 00:08:03.530 70980.529 - 71383.828: 98.1733% ( 33) 00:08:03.530 71383.828 - 71787.126: 98.2192% ( 4) 00:08:03.531 73803.618 - 74206.917: 98.3226% ( 9) 00:08:03.531 74206.917 - 74610.215: 98.5294% ( 18) 00:08:03.531 283922.117 - 285535.311: 98.8511% ( 28) 00:08:03.531 287148.505 - 288761.698: 99.2647% ( 36) 00:08:03.531 288761.698 - 290374.892: 99.6668% ( 35) 00:08:03.531 290374.892 - 291988.086: 99.6783% ( 1) 00:08:03.531 291988.086 - 293601.280: 99.9426% ( 23) 00:08:03.531 293601.280 - 295214.474: 100.0000% ( 5) 00:08:03.531 00:08:03.531 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:03.531 ============================================================================== 00:08:03.531 Range in us Cumulative IO count 00:08:03.531 5923.446 - 5948.652: 0.0345% ( 3) 00:08:03.531 5948.652 - 5973.858: 0.0689% ( 3) 00:08:03.531 5973.858 - 5999.065: 0.0919% ( 2) 00:08:03.531 5999.065 - 6024.271: 0.1379% ( 4) 00:08:03.531 6024.271 - 6049.477: 0.1723% ( 3) 00:08:03.531 6049.477 - 6074.683: 0.1838% ( 1) 00:08:03.531 6074.683 - 6099.889: 0.2183% ( 3) 00:08:03.531 6099.889 - 6125.095: 0.2642% ( 4) 00:08:03.531 6125.095 - 6150.302: 0.2987% ( 3) 00:08:03.531 6150.302 - 6175.508: 0.4596% ( 14) 00:08:03.531 6175.508 - 6200.714: 0.5515% ( 8) 00:08:03.531 6200.714 - 6225.920: 0.5859% ( 3) 00:08:03.531 6225.920 - 6251.126: 0.6549% ( 6) 00:08:03.531 6251.126 - 6276.332: 0.7353% ( 7) 00:08:03.531 6276.332 - 6301.538: 0.8157% ( 7) 00:08:03.531 6301.538 - 6326.745: 0.8961% ( 7) 00:08:03.531 6326.745 - 6351.951: 0.9881% ( 8) 00:08:03.531 6351.951 - 6377.157: 1.1719% ( 16) 00:08:03.531 6377.157 - 6402.363: 1.2638% ( 8) 00:08:03.531 6402.363 - 6427.569: 
[ per-bucket latency histogram entries omitted ]
00:08:03.795 293601.280 - 295214.474: 100.0000% (   15)
00:08:03.795
00:08:03.795 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:03.795 ==============================================================================
00:08:03.795 Range in us Cumulative IO count
00:08:03.795 [ per-bucket latency histogram entries omitted ]
00:08:03.796 293601.280 - 295214.474: 100.0000% (    1)
00:08:03.796
00:08:03.796 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:03.796 ==============================================================================
00:08:03.796 Range in us Cumulative IO count
00:08:03.796 [ per-bucket latency histogram entries omitted ]
00:08:03.797 293601.280 - 295214.474: 100.0000% (   17)
00:08:03.797
00:08:03.797 ************************************
00:08:03.797 END TEST nvme_perf
00:08:03.797 ************************************
00:08:03.797 09:20:29 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:03.797
00:08:03.797 real 0m2.530s
00:08:03.797 user 0m2.220s
00:08:03.797 sys 0m0.198s
00:08:03.797 09:20:29 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.797 09:20:29 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:03.797 09:20:29 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:03.797 09:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:03.797 09:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.797 09:20:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:03.797 ************************************
00:08:03.797 START TEST nvme_hello_world
00:08:03.797 ************************************
00:08:03.797 09:20:29 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:03.797 Initializing NVMe Controllers
00:08:03.797 Attached to 0000:00:13.0
00:08:03.797 Namespace ID: 1 size: 1GB
00:08:03.797 Attached to 0000:00:10.0
00:08:03.797 Namespace ID: 1 size: 6GB
00:08:03.797 Attached to 0000:00:11.0
00:08:03.797 Namespace ID: 1 size: 5GB
00:08:03.797 Attached to 0000:00:12.0
00:08:03.797 Namespace ID: 1 size: 4GB
00:08:03.797 Namespace ID: 2 size: 4GB
00:08:03.797 Namespace ID: 3 size: 4GB
00:08:03.797 Initialization complete.
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:03.797 INFO: using host memory buffer for IO
00:08:03.797 Hello world!
00:08:04.058
00:08:04.058 real 0m0.219s
00:08:04.058 user 0m0.084s
00:08:04.058 sys 0m0.092s
00:08:04.058 09:20:29 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.058 ************************************
00:08:04.058 END TEST nvme_hello_world
00:08:04.058 ************************************
00:08:04.058 09:20:29 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:04.058 09:20:29 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:04.058 09:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:04.058 09:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.058 09:20:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.058 ************************************
00:08:04.058 START TEST nvme_sgl
00:08:04.058 ************************************
00:08:04.058 09:20:29 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:04.058 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:04.058 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:04.058 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:04.058 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:04.058 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:04.058 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:04.058 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:04.320 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:04.320 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:04.320 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:04.320 NVMe Readv/Writev Request test
00:08:04.320 Attached to 0000:00:13.0
00:08:04.320 Attached to 0000:00:10.0
00:08:04.320 Attached to 0000:00:11.0
00:08:04.320 Attached to 0000:00:12.0
00:08:04.320 0000:00:10.0: build_io_request_2 test passed
00:08:04.320 0000:00:10.0: build_io_request_4 test passed
00:08:04.320 0000:00:10.0: build_io_request_5 test passed
00:08:04.320 0000:00:10.0: build_io_request_6 test passed
00:08:04.320 0000:00:10.0: build_io_request_7 test passed
00:08:04.320 0000:00:10.0: build_io_request_10 test passed
00:08:04.320 0000:00:11.0: build_io_request_2 test passed
00:08:04.320 0000:00:11.0: build_io_request_4 test passed
00:08:04.320 0000:00:11.0: build_io_request_5 test passed
00:08:04.320 0000:00:11.0: build_io_request_6 test passed
00:08:04.320 0000:00:11.0: build_io_request_7 test passed
00:08:04.320 0000:00:11.0: build_io_request_10 test passed
00:08:04.320 Cleaning up...
00:08:04.320
00:08:04.320 real 0m0.280s
00:08:04.320 user 0m0.148s
00:08:04.320 sys 0m0.089s
00:08:04.320 09:20:29 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.320 ************************************
00:08:04.320 END TEST nvme_sgl
00:08:04.320 ************************************
00:08:04.320 09:20:29 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:04.320 09:20:29 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:04.320 09:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:04.320 09:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.320 09:20:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.320 ************************************
00:08:04.320 START TEST nvme_e2edp
00:08:04.320 ************************************
00:08:04.320 09:20:29 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:04.583 NVMe Write/Read with End-to-End data protection test
00:08:04.583 Attached to 0000:00:13.0
00:08:04.583 Attached to 0000:00:10.0
00:08:04.583 Attached to 0000:00:11.0
00:08:04.583 Attached to 0000:00:12.0
00:08:04.583 Cleaning up...
00:08:04.583
00:08:04.583 real 0m0.211s
00:08:04.583 user 0m0.066s
00:08:04.583 sys 0m0.102s
00:08:04.583 ************************************
00:08:04.583 END TEST nvme_e2edp
00:08:04.583 ************************************
00:08:04.583 09:20:29 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.583 09:20:29 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 09:20:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:04.583 09:20:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:04.583 09:20:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.583 09:20:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 ************************************
00:08:04.583 START TEST nvme_reserve
00:08:04.583 ************************************
00:08:04.583 09:20:29 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:04.845 =====================================================
00:08:04.845 NVMe Controller at PCI bus 0, device 19, function 0
00:08:04.845 =====================================================
00:08:04.845 Reservations: Not Supported
00:08:04.845 =====================================================
00:08:04.845 NVMe Controller at PCI bus 0, device 16, function 0
00:08:04.845 =====================================================
00:08:04.845 Reservations: Not Supported
00:08:04.845 =====================================================
00:08:04.845 NVMe Controller at PCI bus 0, device 17, function 0
00:08:04.845 =====================================================
00:08:04.845 Reservations: Not Supported
00:08:04.845 =====================================================
00:08:04.845 NVMe Controller at PCI bus 0, device 18, function 0
00:08:04.845 =====================================================
00:08:04.845 Reservations: Not Supported
00:08:04.845 Reservation test passed
00:08:04.845
00:08:04.845 real 0m0.230s
00:08:04.845 user 0m0.079s
00:08:04.845 sys 0m0.095s
00:08:04.845 ************************************
00:08:04.845 END TEST nvme_reserve
00:08:04.845 ************************************
00:08:04.845 09:20:30 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.845 09:20:30 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:04.845 09:20:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:04.845 09:20:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:04.845 09:20:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.845 09:20:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.845 ************************************
00:08:04.845 START TEST nvme_err_injection
00:08:04.845 ************************************
00:08:04.845 09:20:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:05.109 NVMe Error Injection test
00:08:05.109 Attached to 0000:00:13.0
00:08:05.109 Attached to 0000:00:10.0
00:08:05.109 Attached to 0000:00:11.0
00:08:05.109 Attached to 0000:00:12.0
00:08:05.109 0000:00:13.0: get features failed as expected
00:08:05.109 0000:00:10.0: get features failed as expected
00:08:05.109 0000:00:11.0: get features failed as expected
00:08:05.109 0000:00:12.0: get features failed as expected
00:08:05.109 0000:00:13.0: get features successfully as expected
00:08:05.109 0000:00:10.0: get features successfully as expected
00:08:05.109 0000:00:11.0: get features successfully as expected
00:08:05.109 0000:00:12.0: get features successfully as expected
00:08:05.109 0000:00:13.0: read failed as expected
00:08:05.109 0000:00:10.0: read failed as expected
00:08:05.109 0000:00:11.0: read failed as expected
00:08:05.109 0000:00:12.0: read failed as expected
00:08:05.109 0000:00:13.0: read successfully as expected
00:08:05.109 0000:00:10.0: read successfully as expected
00:08:05.109 0000:00:11.0: read successfully as expected
00:08:05.109 0000:00:12.0: read successfully as expected
00:08:05.109 Cleaning up...
00:08:05.109 ************************************
00:08:05.109 END TEST nvme_err_injection
00:08:05.109 ************************************
00:08:05.109
00:08:05.109 real 0m0.232s
00:08:05.109 user 0m0.096s
00:08:05.109 sys 0m0.090s
00:08:05.109 09:20:30 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:05.109 09:20:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:05.109 09:20:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:05.109 09:20:30 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:08:05.109 09:20:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:05.109 09:20:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:05.109 ************************************
00:08:05.109 START TEST nvme_overhead
00:08:05.109 ************************************
00:08:05.109 09:20:30 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:06.539 Initializing NVMe Controllers
00:08:06.539 Attached to 0000:00:13.0
00:08:06.539 Attached to 0000:00:10.0
00:08:06.539 Attached to 0000:00:11.0
00:08:06.539 Attached to 0000:00:12.0
00:08:06.539 Initialization complete. Launching workers.
00:08:06.539 submit (in ns) avg, min, max = 11658.7, 9912.3, 356306.9
00:08:06.539 complete (in ns) avg, min, max = 7818.5, 7377.7, 69142.3
00:08:06.540
00:08:06.540 Submit histogram
00:08:06.540 ================
00:08:06.540 Range in us Cumulative Count
00:08:06.540 [ per-bucket submit latency entries omitted ]
00:08:06.540 356.037 - 357.612: 100.0000% (    1)
00:08:06.540
00:08:06.540 Complete histogram
00:08:06.540 ==================
00:08:06.540 Range in us Cumulative Count
00:08:06.540 [ per-bucket complete latency entries omitted ]
00:08:06.541 68.923 - 69.317: 100.0000% (    1)
00:08:06.541
00:08:06.541 ************************************
00:08:06.541 END TEST nvme_overhead
00:08:06.541 ************************************
00:08:06.541
00:08:06.541 real 0m1.217s
00:08:06.541 user 0m1.071s
00:08:06.541 sys 0m0.101s
00:08:06.541 09:20:31 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:06.541 09:20:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:06.541 09:20:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:06.541 09:20:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:06.541 09:20:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:06.541 09:20:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:06.541 ************************************
00:08:06.541 START TEST nvme_arbitration
00:08:06.541 ************************************
00:08:06.541 09:20:31 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:09.832 Initializing NVMe Controllers
00:08:09.832 Attached to 0000:00:13.0
00:08:09.832 Attached to 0000:00:10.0
00:08:09.832 Attached to 0000:00:11.0
00:08:09.832 Attached to 0000:00:12.0
00:08:09.832 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:08:09.832 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:08:09.832 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:08:09.832 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:09.832 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:09.832 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:09.832 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:09.832 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:09.832 Initialization complete. Launching workers.
00:08:09.832 Starting thread on core 1 with urgent priority queue
00:08:09.832 Starting thread on core 2 with urgent priority queue
00:08:09.832 Starting thread on core 3 with urgent priority queue
00:08:09.832 Starting thread on core 0 with urgent priority queue
00:08:09.832 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios
00:08:09.832 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios
00:08:09.832 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:08:09.832 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:08:09.832 QEMU NVMe Ctrl (12341 ) core 2: 960.00 IO/s 104.17 secs/100000 ios
00:08:09.832 QEMU NVMe Ctrl (12342 ) core 3: 1002.67 IO/s 99.73 secs/100000 ios
00:08:09.832 ========================================================
00:08:09.832
00:08:09.832
00:08:09.832 real 0m3.310s
00:08:09.832 user 0m9.274s
00:08:09.832 sys 0m0.110s
00:08:09.832 ************************************
00:08:09.832 END TEST nvme_arbitration
00:08:09.832 ************************************
00:08:09.832 09:20:34 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.832 09:20:34 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:08:09.832 09:20:34 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:09.832 09:20:34 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:09.832 09:20:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.832 09:20:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.832 ************************************
00:08:09.832 START TEST nvme_single_aen
00:08:09.832 ************************************
00:08:09.832 09:20:34 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:09.832 Asynchronous Event Request test
00:08:09.832 Attached to 0000:00:13.0
00:08:09.832 Attached to 0000:00:10.0
00:08:09.832 Attached to 0000:00:11.0
00:08:09.832 Attached to 0000:00:12.0
00:08:09.832 Reset controller to setup AER completions for this process
00:08:09.832 Registering asynchronous event callbacks...
00:08:09.832 Getting orig temperature thresholds of all controllers
00:08:09.832 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:09.832 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:09.832 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:09.832 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:09.832 Setting all controllers temperature threshold low to trigger AER
00:08:09.832 Waiting for all controllers temperature threshold to be set lower
00:08:09.832 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:09.832 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:09.832 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:09.832 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:09.832 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:09.832 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:09.832 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:09.832 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:09.832 Waiting for all controllers to trigger AER and reset threshold
00:08:09.832 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:09.832 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:09.832 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:09.832 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:09.832 Cleaning up...
00:08:09.832 ************************************
00:08:09.832 END TEST nvme_single_aen
00:08:09.832 ************************************
00:08:09.832
00:08:09.832 real 0m0.220s
00:08:09.832 user 0m0.062s
00:08:09.832 sys 0m0.110s
00:08:09.832 09:20:35 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.832 09:20:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:08:09.832 09:20:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:09.832 09:20:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:09.832 09:20:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.832 09:20:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.832 ************************************
00:08:09.832 START TEST nvme_doorbell_aers
00:08:09.832 ************************************
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:09.832 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:10.093 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:10.093 09:20:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:10.093 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.093 09:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.093 [2024-11-20 09:20:35.531072] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:20.174 Executing: test_write_invalid_db 00:08:20.174 Waiting for AER completion... 00:08:20.174 Failure: test_write_invalid_db 00:08:20.174 00:08:20.174 Executing: test_invalid_db_write_overflow_sq 00:08:20.174 Waiting for AER completion... 00:08:20.174 Failure: test_invalid_db_write_overflow_sq 00:08:20.174 00:08:20.174 Executing: test_invalid_db_write_overflow_cq 00:08:20.174 Waiting for AER completion... 00:08:20.174 Failure: test_invalid_db_write_overflow_cq 00:08:20.174 00:08:20.174 09:20:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.174 09:20:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:20.174 [2024-11-20 09:20:45.561910] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:30.401 Executing: test_write_invalid_db 00:08:30.401 Waiting for AER completion... 00:08:30.401 Failure: test_write_invalid_db 00:08:30.401 00:08:30.401 Executing: test_invalid_db_write_overflow_sq 00:08:30.401 Waiting for AER completion... 00:08:30.401 Failure: test_invalid_db_write_overflow_sq 00:08:30.401 00:08:30.401 Executing: test_invalid_db_write_overflow_cq 00:08:30.401 Waiting for AER completion... 00:08:30.401 Failure: test_invalid_db_write_overflow_cq 00:08:30.401 00:08:30.401 09:20:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.401 09:20:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:30.401 [2024-11-20 09:20:55.602170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:40.393 Executing: test_write_invalid_db 00:08:40.393 Waiting for AER completion... 00:08:40.393 Failure: test_write_invalid_db 00:08:40.393 00:08:40.393 Executing: test_invalid_db_write_overflow_sq 00:08:40.393 Waiting for AER completion... 00:08:40.393 Failure: test_invalid_db_write_overflow_sq 00:08:40.393 00:08:40.393 Executing: test_invalid_db_write_overflow_cq 00:08:40.393 Waiting for AER completion... 
00:08:40.393 Failure: test_invalid_db_write_overflow_cq 00:08:40.393 00:08:40.393 09:21:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:40.393 09:21:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:40.393 [2024-11-20 09:21:05.633769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 Executing: test_write_invalid_db 00:08:50.359 Waiting for AER completion... 00:08:50.359 Failure: test_write_invalid_db 00:08:50.359 00:08:50.359 Executing: test_invalid_db_write_overflow_sq 00:08:50.359 Waiting for AER completion... 00:08:50.359 Failure: test_invalid_db_write_overflow_sq 00:08:50.359 00:08:50.359 Executing: test_invalid_db_write_overflow_cq 00:08:50.359 Waiting for AER completion... 00:08:50.359 Failure: test_invalid_db_write_overflow_cq 00:08:50.359 00:08:50.359 ************************************ 00:08:50.359 END TEST nvme_doorbell_aers 00:08:50.359 ************************************ 00:08:50.359 00:08:50.359 real 0m40.180s 00:08:50.359 user 0m34.067s 00:08:50.359 sys 0m5.700s 00:08:50.359 09:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.359 09:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 09:21:15 nvme -- nvme/nvme.sh@97 -- # uname 00:08:50.359 09:21:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:50.359 09:21:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.359 09:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:50.359 09:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.359 09:21:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 ************************************ 00:08:50.359 START TEST nvme_multi_aen 00:08:50.359 ************************************ 00:08:50.359 09:21:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.359 [2024-11-20 09:21:15.691282] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.691356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.691366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.692796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.692823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.692831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.693803] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. 
Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.693825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.693833] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.695034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.695147] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 [2024-11-20 09:21:15.695212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63296) is not found. Dropping the request. 00:08:50.359 Child process pid: 63817 00:08:50.617 [Child] Asynchronous Event Request test 00:08:50.617 [Child] Attached to 0000:00:13.0 00:08:50.617 [Child] Attached to 0000:00:10.0 00:08:50.617 [Child] Attached to 0000:00:11.0 00:08:50.617 [Child] Attached to 0000:00:12.0 00:08:50.617 [Child] Registering asynchronous event callbacks... 00:08:50.617 [Child] Getting orig temperature thresholds of all controllers 00:08:50.617 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:50.617 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.617 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.617 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.617 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.617 [Child] Cleaning up... 00:08:50.617 Asynchronous Event Request test 00:08:50.617 Attached to 0000:00:13.0 00:08:50.617 Attached to 0000:00:10.0 00:08:50.617 Attached to 0000:00:11.0 00:08:50.617 Attached to 0000:00:12.0 00:08:50.617 Reset controller to setup AER completions for this process 00:08:50.617 Registering asynchronous event callbacks... 
00:08:50.617 Getting orig temperature thresholds of all controllers 00:08:50.617 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.617 Setting all controllers temperature threshold low to trigger AER 00:08:50.617 Waiting for all controllers temperature threshold to be set lower 00:08:50.617 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:50.617 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.617 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:50.618 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.618 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:50.618 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.618 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:50.618 Waiting for all controllers to trigger AER and reset threshold 00:08:50.618 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.618 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.618 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.618 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.618 Cleaning up... 00:08:50.618 00:08:50.618 real 0m0.469s 00:08:50.618 user 0m0.153s 00:08:50.618 sys 0m0.202s 00:08:50.618 09:21:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.618 09:21:15 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:50.618 ************************************ 00:08:50.618 END TEST nvme_multi_aen 00:08:50.618 ************************************ 00:08:50.618 09:21:15 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:50.618 09:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:50.618 09:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.618 09:21:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.618 ************************************ 00:08:50.618 START TEST nvme_startup 00:08:50.618 ************************************ 00:08:50.618 09:21:15 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:50.878 Initializing NVMe Controllers 00:08:50.878 Attached to 0000:00:13.0 00:08:50.878 Attached to 0000:00:10.0 00:08:50.878 Attached to 0000:00:11.0 00:08:50.878 Attached to 0000:00:12.0 00:08:50.878 Initialization complete. 00:08:50.878 Time used:160349.875 (us). 
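Every START TEST / END TEST block in this log, including the real/user/sys figures printed just below for nvme_startup, comes from the run_test wrapper in common/autotest_common.sh. A minimal sketch of its observable behavior, inferred only from the banners and timing lines in this log; the real helper also toggles xtrace and does suite bookkeeping that this omits:

```bash
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"   # bash's time keyword emits the real/user/sys lines seen here
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# Matches the invocation traced above:
run_test nvme_startup "$rootdir/test/nvme/startup/startup" -t 1000000
```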
00:08:50.878 00:08:50.878 real 0m0.231s 00:08:50.878 user 0m0.074s 00:08:50.878 sys 0m0.110s 00:08:50.878 09:21:16 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.878 09:21:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:50.878 ************************************ 00:08:50.878 END TEST nvme_startup 00:08:50.878 ************************************ 00:08:50.878 09:21:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:50.878 09:21:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.878 09:21:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.878 09:21:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.878 ************************************ 00:08:50.878 START TEST nvme_multi_secondary 00:08:50.878 ************************************ 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63873 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63874 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:50.878 09:21:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:54.174 Initializing NVMe Controllers 00:08:54.174 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.174 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.174 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.174 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.174 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:54.174 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:54.174 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:54.174 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:54.174 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:54.174 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:54.174 Initialization complete. Launching workers. 
00:08:54.174 ======================================================== 00:08:54.174 Latency(us) 00:08:54.174 Device Information : IOPS MiB/s Average min max 00:08:54.174 PCIE (0000:00:13.0) NSID 1 from core 2: 2057.45 8.04 7776.10 1953.93 17511.42 00:08:54.174 PCIE (0000:00:10.0) NSID 1 from core 2: 2057.45 8.04 7785.76 1694.21 16822.77 00:08:54.174 PCIE (0000:00:11.0) NSID 1 from core 2: 2057.45 8.04 7786.93 1709.04 17183.72 00:08:54.174 PCIE (0000:00:12.0) NSID 1 from core 2: 2057.45 8.04 7787.62 1582.61 17921.13 00:08:54.174 PCIE (0000:00:12.0) NSID 2 from core 2: 2057.45 8.04 7788.08 1415.63 19694.90 00:08:54.174 PCIE (0000:00:12.0) NSID 3 from core 2: 2057.45 8.04 7788.54 1950.51 19450.05 00:08:54.174 ======================================================== 00:08:54.174 Total : 12344.71 48.22 7785.51 1415.63 19694.90 00:08:54.174 00:08:54.435 Initializing NVMe Controllers 00:08:54.435 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.435 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.435 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.435 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.435 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:54.435 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:54.435 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:54.435 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:54.435 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:54.435 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:54.435 Initialization complete. Launching workers. 00:08:54.435 ======================================================== 00:08:54.435 Latency(us) 00:08:54.435 Device Information : IOPS MiB/s Average min max 00:08:54.435 PCIE (0000:00:13.0) NSID 1 from core 1: 4474.35 17.48 3575.41 1219.40 10522.36 00:08:54.435 PCIE (0000:00:10.0) NSID 1 from core 1: 4474.35 17.48 3574.45 1188.95 10179.72 00:08:54.435 PCIE (0000:00:11.0) NSID 1 from core 1: 4474.35 17.48 3575.60 1211.89 10892.82 00:08:54.435 PCIE (0000:00:12.0) NSID 1 from core 1: 4474.35 17.48 3575.79 1079.40 10805.83 00:08:54.435 PCIE (0000:00:12.0) NSID 2 from core 1: 4474.35 17.48 3575.72 1190.53 11881.15 00:08:54.435 PCIE (0000:00:12.0) NSID 3 from core 1: 4474.35 17.48 3575.94 1094.48 12240.84 00:08:54.435 ======================================================== 00:08:54.435 Total : 26846.08 104.87 3575.48 1079.40 12240.84 00:08:54.435 00:08:54.435 09:21:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63873 00:08:56.343 Initializing NVMe Controllers 00:08:56.343 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.343 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.343 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.343 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.343 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:56.343 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:56.343 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:56.343 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:56.343 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:56.343 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:56.344 Initialization complete. Launching workers. 
00:08:56.344 ======================================================== 00:08:56.344 Latency(us) 00:08:56.344 Device Information : IOPS MiB/s Average min max 00:08:56.344 PCIE (0000:00:13.0) NSID 1 from core 0: 6926.94 27.06 2309.38 976.40 7729.11 00:08:56.344 PCIE (0000:00:10.0) NSID 1 from core 0: 6926.94 27.06 2308.50 899.68 7929.67 00:08:56.344 PCIE (0000:00:11.0) NSID 1 from core 0: 6926.94 27.06 2309.44 940.80 7914.35 00:08:56.344 PCIE (0000:00:12.0) NSID 1 from core 0: 6926.94 27.06 2309.40 897.69 8111.99 00:08:56.344 PCIE (0000:00:12.0) NSID 2 from core 0: 6926.74 27.06 2309.44 943.78 7983.75 00:08:56.344 PCIE (0000:00:12.0) NSID 3 from core 0: 6926.94 27.06 2309.32 902.35 7868.78 00:08:56.344 ======================================================== 00:08:56.344 Total : 41561.42 162.35 2309.24 897.69 8111.99 00:08:56.344 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63874 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63943 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63944 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:56.344 09:21:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:59.637 Initializing NVMe Controllers 00:08:59.637 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.637 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.637 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.637 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:59.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:59.637 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:59.637 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:59.637 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:59.637 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:59.637 Initialization complete. Launching workers. 
00:08:59.638 ======================================================== 00:08:59.638 Latency(us) 00:08:59.638 Device Information : IOPS MiB/s Average min max 00:08:59.638 PCIE (0000:00:13.0) NSID 1 from core 0: 3737.42 14.60 4280.52 1163.68 11562.41 00:08:59.638 PCIE (0000:00:10.0) NSID 1 from core 0: 3737.09 14.60 4279.96 1193.31 11019.22 00:08:59.638 PCIE (0000:00:11.0) NSID 1 from core 0: 3737.42 14.60 4281.25 1171.56 11432.42 00:08:59.638 PCIE (0000:00:12.0) NSID 1 from core 0: 3737.42 14.60 4281.62 1122.13 11253.15 00:08:59.638 PCIE (0000:00:12.0) NSID 2 from core 0: 3737.42 14.60 4282.23 1097.19 10787.91 00:08:59.638 PCIE (0000:00:12.0) NSID 3 from core 0: 3737.42 14.60 4282.82 1164.27 10834.03 00:08:59.638 ======================================================== 00:08:59.638 Total : 22424.21 87.59 4281.40 1097.19 11562.41 00:08:59.638 00:08:59.638 Initializing NVMe Controllers 00:08:59.638 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.638 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.638 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.638 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.638 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:59.638 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:59.638 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:59.638 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:59.638 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:59.638 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:59.638 Initialization complete. Launching workers. 00:08:59.638 ======================================================== 00:08:59.638 Latency(us) 00:08:59.638 Device Information : IOPS MiB/s Average min max 00:08:59.638 PCIE (0000:00:13.0) NSID 1 from core 1: 3449.13 13.47 4638.21 1148.18 12496.92 00:08:59.638 PCIE (0000:00:10.0) NSID 1 from core 1: 3449.13 13.47 4636.78 1153.49 12768.68 00:08:59.638 PCIE (0000:00:11.0) NSID 1 from core 1: 3449.13 13.47 4637.99 1110.04 12557.84 00:08:59.638 PCIE (0000:00:12.0) NSID 1 from core 1: 3449.13 13.47 4637.87 1002.58 12117.44 00:08:59.638 PCIE (0000:00:12.0) NSID 2 from core 1: 3449.13 13.47 4637.78 831.65 10854.00 00:08:59.638 PCIE (0000:00:12.0) NSID 3 from core 1: 3449.13 13.47 4637.67 813.91 11343.74 00:08:59.638 ======================================================== 00:08:59.638 Total : 20694.76 80.84 4637.72 813.91 12768.68 00:08:59.638 00:09:01.545 Initializing NVMe Controllers 00:09:01.545 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.545 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.545 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.545 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.545 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:01.545 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:01.545 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:01.545 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:01.545 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:01.545 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:01.545 Initialization complete. Launching workers. 
00:09:01.545 ======================================================== 00:09:01.545 Latency(us) 00:09:01.545 Device Information : IOPS MiB/s Average min max 00:09:01.545 PCIE (0000:00:13.0) NSID 1 from core 2: 2698.70 10.54 5928.41 1210.90 23894.38 00:09:01.545 PCIE (0000:00:10.0) NSID 1 from core 2: 2698.70 10.54 5927.34 1164.94 23105.28 00:09:01.545 PCIE (0000:00:11.0) NSID 1 from core 2: 2698.70 10.54 5928.48 996.01 20977.26 00:09:01.545 PCIE (0000:00:12.0) NSID 1 from core 2: 2698.70 10.54 5928.42 1123.28 23559.37 00:09:01.545 PCIE (0000:00:12.0) NSID 2 from core 2: 2698.70 10.54 5928.40 1170.48 22909.35 00:09:01.545 PCIE (0000:00:12.0) NSID 3 from core 2: 2698.70 10.54 5928.37 1001.51 22189.90 00:09:01.545 ======================================================== 00:09:01.545 Total : 16192.17 63.25 5928.24 996.01 23894.38 00:09:01.545 00:09:01.545 ************************************ 00:09:01.545 END TEST nvme_multi_secondary 00:09:01.545 ************************************ 00:09:01.545 09:21:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63943 00:09:01.545 09:21:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63944 00:09:01.545 00:09:01.545 real 0m10.630s 00:09:01.545 user 0m18.334s 00:09:01.545 sys 0m0.701s 00:09:01.545 09:21:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.545 09:21:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:01.545 09:21:26 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:01.545 09:21:26 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:01.545 09:21:26 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62893 ]] 00:09:01.545 09:21:26 nvme -- common/autotest_common.sh@1094 -- # kill 62893 00:09:01.545 09:21:26 nvme -- common/autotest_common.sh@1095 -- # wait 62893 00:09:01.545 [2024-11-20 09:21:26.940487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.940583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.940621] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.940644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.943571] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.943838] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.943867] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.943888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.946790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 
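The burst of "The owning process (pid 63816) is not found. Dropping the request." errors above, which continues below, is expected at this point: kill_stub is tearing down the stub process while controllers still hold AER admin requests registered by now-exited test processes, and the driver drops those requests during cleanup. The traced teardown amounts to roughly this sketch (the literal pid 62893 and stub file come from this run; the real helper reads them from suite state):

```bash
kill_stub() {
    local stubpid=62893          # hardcoded here only to mirror the trace
    if [[ -e /proc/$stubpid ]]; then
        kill "$stubpid"
        wait "$stubpid" || true  # reap it; pending admin requests get dropped
    fi
    rm -f /var/run/spdk_stub0    # traced below at common/autotest_common.sh@1097
}
```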
00:09:01.545 [2024-11-20 09:21:26.946852] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.946873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.946895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.949550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.949592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.949604] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.545 [2024-11-20 09:21:26.949616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63816) is not found. Dropping the request. 00:09:01.806 09:21:27 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:01.806 09:21:27 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:01.806 09:21:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:01.806 09:21:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.806 09:21:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.806 09:21:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.806 ************************************ 00:09:01.806 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:01.806 ************************************ 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:01.806 * Looking for test storage... 
00:09:01.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.806 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:01.807 
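The scripts/common.sh trace above (lt 1.15 2, expanding to cmp_versions 1.15 '<' 2) is the suite comparing the detected lcov version, 1.15 here, against 2 before choosing coverage options. Below is a sketch of just the '<' path, reconstructed from the xtrace; the real cmp_versions dispatches on the operator through a case statement, and the non-numeric fallback in decimal is an assumption:

```bash
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0  # assumption: non-numeric fields compare as 0
    echo "$d"
}

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                # split version fields on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v d1 d2
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        d1=$(decimal "${ver1[v]:-0}")
        d2=$(decimal "${ver2[v]:-0}")
        ((d1 > d2)) && return 1  # left side is newer, so "<" is false
        ((d1 < d2)) && return 0  # left side is older, "<" holds
    done
    return 1                     # equal versions: "<" is false
}

lt 1.15 2 && echo "lcov older than 2.x"  # true in this run, as traced above
```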
09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.807 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:02.067 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64106 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64106 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64106 ']' 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.068 09:21:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.068 [2024-11-20 09:21:27.353033] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:02.068 [2024-11-20 09:21:27.353269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64106 ] 00:09:02.068 [2024-11-20 09:21:27.517083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.328 [2024-11-20 09:21:27.621631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.328 [2024-11-20 09:21:27.621871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.328 [2024-11-20 09:21:27.622244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.328 [2024-11-20 09:21:27.622260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.899 nvme0n1 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_DMloU.txt 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.899 true 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732094488 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64129 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:02.899 09:21:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.444 [2024-11-20 09:21:30.308280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:05.444 [2024-11-20 09:21:30.308542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:05.444 [2024-11-20 09:21:30.308566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:05.444 [2024-11-20 09:21:30.308579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.444 [2024-11-20 09:21:30.310392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:05.444 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64129 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64129 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64129 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_DMloU.txt 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_DMloU.txt 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64106 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64106 ']' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64106 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64106 00:09:05.444 killing process with pid 64106 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64106' 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64106 00:09:05.444 09:21:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64106 00:09:06.829 09:21:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:06.829 09:21:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:06.829 00:09:06.829 real 0m4.851s 00:09:06.829 user 0m17.298s 00:09:06.829 sys 0m0.502s 00:09:06.829 09:21:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:06.829 ************************************ 00:09:06.829 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:06.829 ************************************ 00:09:06.829 09:21:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.829 09:21:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:06.829 09:21:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:06.829 09:21:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.829 09:21:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.829 09:21:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.829 ************************************ 00:09:06.829 START TEST nvme_fio 00:09:06.829 ************************************ 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:06.829 09:21:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:06.829 09:21:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:06.829 09:21:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.829 09:21:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:06.829 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:06.829 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:06.829 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.090 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.090 09:21:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:07.090 09:21:32 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.090 09:21:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.351 fio-3.35 00:09:07.351 Starting 1 thread 00:09:13.943 00:09:13.943 test: (groupid=0, jobs=1): err= 0: pid=64269: Wed Nov 20 09:21:38 2024 00:09:13.943 read: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(169MiB/2001msec) 00:09:13.943 slat (nsec): min=3373, max=62343, avg=5269.76, stdev=2346.22 00:09:13.943 clat (usec): min=651, max=8843, avg=2952.86, stdev=864.81 00:09:13.943 lat (usec): min=656, max=8859, avg=2958.13, stdev=866.07 00:09:13.943 clat percentiles (usec): 00:09:13.943 | 1.00th=[ 1991], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:13.943 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:13.943 | 70.00th=[ 2900], 80.00th=[ 3163], 90.00th=[ 4015], 95.00th=[ 5014], 00:09:13.943 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7373], 00:09:13.943 | 99.99th=[ 8356] 00:09:13.943 bw ( KiB/s): min=86104, max=88840, per=100.00%, avg=87608.00, stdev=1388.13, samples=3 00:09:13.943 iops : min=21526, max=22210, avg=21902.00, stdev=347.03, samples=3 00:09:13.943 write: IOPS=21.4k, BW=83.8MiB/s (87.8MB/s)(168MiB/2001msec); 0 zone resets 00:09:13.943 slat (nsec): min=3425, max=63606, avg=5539.13, stdev=2376.19 00:09:13.943 clat (usec): min=257, max=8417, avg=2968.56, stdev=865.57 00:09:13.943 lat (usec): min=278, max=8422, avg=2974.10, stdev=866.81 00:09:13.943 clat percentiles (usec): 00:09:13.943 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:13.943 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:13.943 | 70.00th=[ 2900], 80.00th=[ 3163], 90.00th=[ 4047], 95.00th=[ 5080], 00:09:13.943 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7308], 00:09:13.943 | 99.99th=[ 8029] 00:09:13.943 bw ( KiB/s): min=86408, max=88648, per=100.00%, avg=87762.67, stdev=1191.47, samples=3 00:09:13.943 iops : min=21602, max=22162, avg=21940.67, stdev=297.87, samples=3 00:09:13.943 lat (usec) : 500=0.01%, 750=0.01% 00:09:13.943 lat (msec) : 2=0.99%, 4=88.78%, 10=10.22% 00:09:13.943 cpu : usr=99.05%, sys=0.10%, ctx=4, majf=0, minf=607 00:09:13.943 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.943 issued rwts: total=43244,42917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.943 00:09:13.943 Run status group 0 (all jobs): 00:09:13.943 READ: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=169MiB (177MB), run=2001-2001msec 00:09:13.943 WRITE: bw=83.8MiB/s (87.8MB/s), 83.8MiB/s-83.8MiB/s (87.8MB/s-87.8MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:13.943 ----------------------------------------------------- 00:09:13.943 Suppressions used: 00:09:13.943 count bytes template 00:09:13.943 1 32 /usr/src/fio/parse.c 00:09:13.943 1 8 libtcmalloc_minimal.so 00:09:13.943 ----------------------------------------------------- 00:09:13.943 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.943 09:21:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.943 09:21:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.943 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.943 fio-3.35 00:09:13.943 Starting 1 thread 00:09:20.518 00:09:20.518 test: (groupid=0, jobs=1): err= 0: pid=64330: Wed Nov 20 09:21:44 2024 00:09:20.518 read: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec) 00:09:20.518 slat (nsec): min=3393, max=99040, avg=5324.52, stdev=2612.53 00:09:20.518 clat (usec): min=311, max=9128, avg=3012.71, stdev=851.24 00:09:20.518 lat (usec): min=318, max=9132, avg=3018.03, stdev=852.67 00:09:20.518 clat percentiles (usec): 00:09:20.518 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2573], 00:09:20.518 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2802], 00:09:20.518 | 70.00th=[ 2900], 80.00th=[ 3097], 90.00th=[ 4113], 95.00th=[ 5080], 00:09:20.518 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7635], 00:09:20.518 | 99.99th=[ 8225] 00:09:20.518 bw ( KiB/s): min=79472, max=86488, per=98.60%, avg=83597.33, stdev=3667.34, samples=3 00:09:20.518 iops : min=19868, max=21622, avg=20899.33, stdev=916.83, samples=3 00:09:20.518 write: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(165MiB/2001msec); 0 zone resets 00:09:20.518 slat (nsec): min=3519, max=92680, avg=5654.04, stdev=2572.74 00:09:20.518 clat (usec): min=361, max=8909, avg=3025.58, stdev=849.76 00:09:20.518 lat (usec): min=368, max=8914, avg=3031.24, stdev=851.19 00:09:20.518 clat percentiles (usec): 00:09:20.518 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:20.518 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2835], 00:09:20.518 | 70.00th=[ 2900], 80.00th=[ 3130], 90.00th=[ 4113], 95.00th=[ 5080], 00:09:20.518 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 7635], 00:09:20.518 | 99.99th=[ 8291] 00:09:20.518 bw ( KiB/s): min=79328, max=86840, per=99.38%, avg=83672.00, stdev=3891.63, samples=3 00:09:20.518 iops : min=19832, max=21710, avg=20918.00, stdev=972.91, samples=3 00:09:20.518 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:20.518 lat (msec) : 2=0.92%, 4=88.51%, 10=10.53% 00:09:20.518 cpu : usr=99.00%, sys=0.10%, ctx=5, majf=0, minf=607 00:09:20.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.518 issued rwts: total=42415,42120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.518 00:09:20.518 Run status group 0 (all jobs): 00:09:20.518 READ: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:20.518 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:20.518 ----------------------------------------------------- 00:09:20.518 Suppressions used: 00:09:20.518 count bytes template 00:09:20.518 1 32 /usr/src/fio/parse.c 00:09:20.518 1 8 libtcmalloc_minimal.so 00:09:20.518 ----------------------------------------------------- 00:09:20.518 00:09:20.518 09:21:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:09:20.518 09:21:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.518 09:21:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.518 09:21:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.518 09:21:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.518 09:21:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.518 09:21:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.518 09:21:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.518 09:21:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.518 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:20.518 fio-3.35 00:09:20.518 Starting 1 thread 00:09:27.082 00:09:27.082 test: (groupid=0, jobs=1): err= 0: pid=64387: Wed Nov 20 09:21:51 2024 00:09:27.082 read: IOPS=19.3k, BW=75.3MiB/s (78.9MB/s)(151MiB/2001msec) 00:09:27.082 slat (nsec): min=3422, max=83944, avg=6378.21, stdev=2871.70 00:09:27.082 clat (usec): min=237, max=8521, avg=3309.24, stdev=938.95 00:09:27.082 lat (usec): min=242, max=8548, avg=3315.62, stdev=940.65 00:09:27.082 clat percentiles (usec): 00:09:27.082 | 1.00th=[ 2376], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:09:27.082 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3097], 
00:09:27.082 | 70.00th=[ 3359], 80.00th=[ 3916], 90.00th=[ 4621], 95.00th=[ 5342], 00:09:27.082 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7570], 00:09:27.082 | 99.99th=[ 8291] 00:09:27.082 bw ( KiB/s): min=70640, max=82256, per=96.80%, avg=74618.67, stdev=6616.06, samples=3 00:09:27.082 iops : min=17660, max=20564, avg=18654.67, stdev=1654.01, samples=3 00:09:27.082 write: IOPS=19.2k, BW=75.2MiB/s (78.8MB/s)(150MiB/2001msec); 0 zone resets 00:09:27.082 slat (nsec): min=3591, max=75219, avg=6682.38, stdev=2829.89 00:09:27.082 clat (usec): min=228, max=8374, avg=3313.63, stdev=935.68 00:09:27.082 lat (usec): min=233, max=8389, avg=3320.31, stdev=937.36 00:09:27.082 clat percentiles (usec): 00:09:27.082 | 1.00th=[ 2376], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:09:27.082 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3097], 00:09:27.082 | 70.00th=[ 3392], 80.00th=[ 3916], 90.00th=[ 4621], 95.00th=[ 5276], 00:09:27.082 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7504], 00:09:27.082 | 99.99th=[ 8094] 00:09:27.082 bw ( KiB/s): min=70656, max=82408, per=97.02%, avg=74664.00, stdev=6707.88, samples=3 00:09:27.082 iops : min=17664, max=20602, avg=18666.00, stdev=1676.97, samples=3 00:09:27.082 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:27.082 lat (msec) : 2=0.20%, 4=81.06%, 10=18.70% 00:09:27.082 cpu : usr=98.90%, sys=0.15%, ctx=7, majf=0, minf=607 00:09:27.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:27.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.082 issued rwts: total=38561,38499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.082 00:09:27.082 Run status group 0 (all jobs): 00:09:27.082 READ: bw=75.3MiB/s (78.9MB/s), 75.3MiB/s-75.3MiB/s (78.9MB/s-78.9MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:27.082 WRITE: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=150MiB (158MB), run=2001-2001msec 00:09:27.082 ----------------------------------------------------- 00:09:27.082 Suppressions used: 00:09:27.082 count bytes template 00:09:27.082 1 32 /usr/src/fio/parse.c 00:09:27.082 1 8 libtcmalloc_minimal.so 00:09:27.082 ----------------------------------------------------- 00:09:27.082 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:27.083 09:21:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:27.083 09:21:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:27.083 09:21:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:27.083 09:21:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:27.083 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:27.083 fio-3.35 00:09:27.083 Starting 1 thread 00:09:37.075 00:09:37.075 test: (groupid=0, jobs=1): err= 0: pid=64453: Wed Nov 20 09:22:01 2024 00:09:37.075 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec) 00:09:37.075 slat (nsec): min=3336, max=75681, avg=5495.46, stdev=2547.77 00:09:37.075 clat (usec): min=157, max=13049, avg=3051.40, stdev=875.76 00:09:37.075 lat (usec): min=160, max=13053, avg=3056.90, stdev=877.03 00:09:37.075 clat percentiles (usec): 00:09:37.075 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:37.075 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:09:37.075 | 70.00th=[ 3097], 80.00th=[ 3523], 90.00th=[ 4228], 95.00th=[ 4883], 00:09:37.075 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 8225], 99.95th=[10290], 00:09:37.075 | 99.99th=[12518] 00:09:37.075 bw ( KiB/s): min=74584, max=89320, per=98.16%, avg=82354.67, stdev=7400.94, samples=3 00:09:37.075 iops : min=18646, max=22330, avg=20588.67, stdev=1850.23, samples=3 00:09:37.075 write: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:09:37.075 slat (nsec): min=3475, max=63517, avg=5747.03, stdev=2403.68 00:09:37.075 clat (usec): min=170, max=12876, avg=3047.09, stdev=858.92 00:09:37.075 lat (usec): min=173, max=12880, avg=3052.84, stdev=860.13 00:09:37.075 clat percentiles (usec): 00:09:37.075 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:37.075 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2868], 00:09:37.075 | 70.00th=[ 3097], 80.00th=[ 3490], 90.00th=[ 4178], 95.00th=[ 4883], 00:09:37.075 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 8029], 
99.95th=[ 9372], 00:09:37.075 | 99.99th=[12256] 00:09:37.075 bw ( KiB/s): min=74552, max=89296, per=98.77%, avg=82421.33, stdev=7422.16, samples=3 00:09:37.075 iops : min=18638, max=22324, avg=20605.33, stdev=1855.54, samples=3 00:09:37.075 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:37.075 lat (msec) : 2=1.16%, 4=86.94%, 10=11.79%, 20=0.05% 00:09:37.075 cpu : usr=99.00%, sys=0.10%, ctx=5, majf=0, minf=605 00:09:37.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:37.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.075 issued rwts: total=41970,41743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.075 00:09:37.075 Run status group 0 (all jobs): 00:09:37.075 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:37.075 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:37.076 ----------------------------------------------------- 00:09:37.076 Suppressions used: 00:09:37.076 count bytes template 00:09:37.076 1 32 /usr/src/fio/parse.c 00:09:37.076 1 8 libtcmalloc_minimal.so 00:09:37.076 ----------------------------------------------------- 00:09:37.076 00:09:37.076 ************************************ 00:09:37.076 END TEST nvme_fio 00:09:37.076 ************************************ 00:09:37.076 09:22:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:37.076 09:22:02 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:37.076 00:09:37.076 real 0m30.223s 00:09:37.076 user 0m18.189s 00:09:37.076 sys 0m21.399s 00:09:37.076 09:22:02 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.076 09:22:02 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:37.076 ************************************ 00:09:37.076 END TEST nvme 00:09:37.076 ************************************ 00:09:37.076 00:09:37.076 real 1m40.708s 00:09:37.076 user 3m40.735s 00:09:37.076 sys 0m32.388s 00:09:37.076 09:22:02 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.076 09:22:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.076 09:22:02 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:37.076 09:22:02 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:37.076 09:22:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.076 09:22:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.076 09:22:02 -- common/autotest_common.sh@10 -- # set +x 00:09:37.076 ************************************ 00:09:37.076 START TEST nvme_scc 00:09:37.076 ************************************ 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:37.076 * Looking for test storage... 
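Each fio run above goes through the same launcher in autotest_common.sh: because the SPDK fio plugin is built with ASan, the sanitizer runtime has to be loaded before the plugin itself, so the helper resolves it from the plugin's ldd output and prepends it to LD_PRELOAD. A condensed, standalone sketch of that pattern (an approximation of the traced helper, not its exact code):

    #!/usr/bin/env bash
    # Sketch of the preload dance traced above: find the ASan runtime the
    # plugin links against and load it ahead of the plugin itself.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')

    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # the third field is the resolved path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # Run fio with the sanitizer runtime (if any) preloaded before the plugin.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096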
00:09:37.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.076 --rc genhtml_branch_coverage=1 00:09:37.076 --rc genhtml_function_coverage=1 00:09:37.076 --rc genhtml_legend=1 00:09:37.076 --rc geninfo_all_blocks=1 00:09:37.076 --rc geninfo_unexecuted_blocks=1 00:09:37.076 00:09:37.076 ' 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.076 --rc genhtml_branch_coverage=1 00:09:37.076 --rc genhtml_function_coverage=1 00:09:37.076 --rc genhtml_legend=1 00:09:37.076 --rc geninfo_all_blocks=1 00:09:37.076 --rc geninfo_unexecuted_blocks=1 00:09:37.076 00:09:37.076 ' 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.076 --rc genhtml_branch_coverage=1 00:09:37.076 --rc genhtml_function_coverage=1 00:09:37.076 --rc genhtml_legend=1 00:09:37.076 --rc geninfo_all_blocks=1 00:09:37.076 --rc geninfo_unexecuted_blocks=1 00:09:37.076 00:09:37.076 ' 00:09:37.076 09:22:02 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.076 --rc genhtml_branch_coverage=1 00:09:37.076 --rc genhtml_function_coverage=1 00:09:37.076 --rc genhtml_legend=1 00:09:37.076 --rc geninfo_all_blocks=1 00:09:37.076 --rc geninfo_unexecuted_blocks=1 00:09:37.076 00:09:37.076 ' 00:09:37.076 09:22:02 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.076 09:22:02 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.076 09:22:02 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.076 09:22:02 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.076 09:22:02 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.076 09:22:02 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:37.076 09:22:02 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
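The lt 1.15 2 trace just above is scripts/common.sh deciding whether the installed lcov predates version 2: cmp_versions splits both strings on ., - and : and compares them field by field as integers rather than lexically. A minimal sketch of the same idea (hypothetical name ver_lt; the real cmp_versions also supports > and = via the op argument):

    # Field-wise numeric version compare in the style of cmp_versions.
    ver_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<<"$1"
        IFS=.-: read -ra v2 <<<"$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            # Missing fields count as 0, so "1.15" vs "2" is 1.15.0 vs 2.0.0.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # equal is not less-than
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"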
00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:37.076 09:22:02 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:37.076 09:22:02 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.076 09:22:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:37.076 09:22:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:37.076 09:22:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:37.076 09:22:02 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.348 Waiting for block devices as requested 00:09:37.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.606 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.606 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.907 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.907 09:22:08 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:42.907 09:22:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:42.907 09:22:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:42.907 09:22:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.907 09:22:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
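Everything from here to the end of the section is scan_nvme_ctrls filling one global associative array per controller: nvme_get runs nvme id-ctrl, splits each "field : value" line with IFS=: read -r reg val, and evals one nvme0[field]=value assignment per register, which is why the trace repeats the same few-line pattern for vid, ssvid, sn and so on. A self-contained sketch of that parse (simplified; the real helper also shifts into per-namespace arrays like nvme0n1 and quotes multi-word values through eval):

    # Sketch: turn `nvme id-ctrl` output into an associative array, the way
    # nvme_get populates nvme0[vid]=0x1b36, nvme0[oncs]=0x15d, ...
    declare -A nvme0=()

    while IFS=: read -r reg val; do
        # Keep only "field : value" lines; skip headers and blanks.
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}              # field names are single tokens
        val=${val#"${val%%[![:space:]]*}"}    # trim leading whitespace only
        nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[vid]}"   # 0x1b36 for the QEMU controller traced here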
00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.907 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
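The payoff for capturing every identify field is that later checks can gate on them without re-running nvme-cli: farther down this dump the controller reports nvme0[oncs]=0x15d, and bit 8 (0x100) of ONCS is the Copy-command support bit in the NVMe base specification, which is what a simple-copy (SCC) test ultimately cares about. A hedged sketch of that kind of capability check (illustrative only; the test's actual gating helper is not part of this trace):

    # Sketch: gate on ONCS bit 8 (Copy command supported, NVMe base spec).
    oncs=${nvme0[oncs]:-0}        # 0x15d in the dump below: bit 8 is set
    if (( oncs & 0x100 )); then
        echo "nvme0 supports the Copy command"
    else
        echo "nvme0 lacks Copy support; skipping SCC test"
    fi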
00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.908 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:42.909 09:22:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:42.909 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.910 09:22:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:42.910 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
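
Zooming out from the field dump: the surrounding control flow (functions.sh@47-63 in this trace, visible below when nvme0 is registered and the loop moves on to nvme1) is a scan of /sys/class/nvme. A compact sketch of that shape, assuming pci_can_use is sourced from scripts/common.sh as traced and nvme_get_sketch is the helper above; deriving the BDF via readlink is an assumption here, since the trace only shows the resulting pci=0000:00:11.0 / 0000:00:10.0 values:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0 (assumed lookup)
	pci_can_use "$pci" || continue                     # honor PCI allow/block lists
	ctrl_dev=${ctrl##*/}                               # nvme0, nvme1, ...
	nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
	declare -gA "${ctrl_dev}_ns=()"
	declare -n _ctrl_ns=${ctrl_dev}_ns                 # nameref, as at functions.sh@53
	for ns in "$ctrl/${ctrl##*/}n"*; do                # /sys/class/nvme/nvme0/nvme0n1 ...
		[[ -e $ns ]] || continue
		ns_dev=${ns##*/}
		nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
		_ctrl_ns[${ns_dev##*n}]=$ns_dev                # keyed by namespace index
	done
	unset -n _ctrl_ns
	ctrls[$ctrl_dev]=$ctrl_dev                         # registration, as traced at @60-63
	nvmes[$ctrl_dev]=${ctrl_dev}_ns
	bdfs[$ctrl_dev]=$pci
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done

That shape is why the trace below jumps from nvme0n1's last lbaf straight to pci=0000:00:10.0 and a fresh id-ctrl pass for nvme1.
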
00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.911 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.912 09:22:08 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:42.912 09:22:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:42.912 09:22:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:42.912 09:22:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.912 09:22:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:42.912 
09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- 
# nvme1[mdts]=7 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.912 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.913 09:22:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.913 09:22:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.913 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:42.914 09:22:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:42.914 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:42.915 09:22:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
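
A worked example of consuming these arrays: the low nibble of flbas selects the active LBA format, whose lbads field is the log2 block size. nvme0n1 above recorded flbas=0x4 and lbaf4 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte blocks; with nsze=0x140000 (1310720 blocks) that namespace is exactly 5 GiB. A sketch of the decode, assuming the nvme0n1 array populated above:

fmt=$(( nvme0n1[flbas] & 0xf ))             # -> 4, the in-use format (matches the '(in use)' tag)
lbaf=${nvme0n1[lbaf$fmt]}                   # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}                       # strip through 'lbads:'
lbads=${lbads%% *}                          # -> 12
echo $(( 1 << lbads ))                      # -> 4096 bytes per block
echo $(( nvme0n1[nsze] * (1 << lbads) ))    # -> 5368709120 bytes = 5 GiB

nvme1n1's id-ns, parsed next, reports flbas=0x7, so it runs format 7; judging by nvme0n1's lbaf7 'ms:64 lbads:12 rp:0', that is presumably the same 4096-byte data size plus 64 bytes of per-block metadata on this QEMU controller.
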
00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:42.915 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.916 
09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.916 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:42.917 09:22:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:42.917 09:22:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:42.917 09:22:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.917 09:22:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:42.917 09:22:08 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:42.917 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:42.918 09:22:08 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:42.918 09:22:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
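The wctemp=343 and cctemp=373 values captured just above are in Kelvin, the unit the NVMe Identify Controller structure uses for all temperature fields (including the mntmt/mxtmt thresholds parsed a few records below). A minimal conversion sketch, assuming the spec's integer-Kelvin convention; this loop is illustrative only and is not part of functions.sh:

    # NVMe temperature fields are integer Kelvin; values from this trace:
    for k in 343 373; do
        echo "$k K = $(( k - 273 )) C"   # prints 70 C (warning) and 100 C (critical)
    done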
00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:42.918 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:42.919 09:22:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
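Every record in this trace repeats one pattern from nvme/functions.sh: `IFS=:` plus `read -r reg val` splits a line of `nvme id-ctrl` (or `id-ns`) output at the first colon, the `[[ -n ... ]]` guard skips lines with no value (such as the "NVME Identify Controller:" header, which shows up as the `[[ -n '' ]]` checks above), and an `eval` stores the pair into a named associative array such as nvme2. A self-contained sketch of that loop, assuming nvme-cli's "field : value" text output; parse_id_ctrl and the ctrl array are illustrative names, not the exact SPDK helpers:

    #!/usr/bin/env bash
    # Sketch of the populate loop visible in this trace.
    parse_id_ctrl() {
        local dev=$1 reg val
        declare -gA ctrl=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # trim padding around the field name
            [[ -n $reg && -n $val ]] || continue
            eval "ctrl[$reg]=\"${val# }\""    # e.g. ctrl[cctemp]=373, ctrl[sn]='12342 '
        done < <(nvme id-ctrl "$dev")
    }
    parse_id_ctrl /dev/nvme2
    echo "sn=${ctrl[sn]} mdts=${ctrl[mdts]}"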
00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
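The sqes=0x66 and cqes=0x44 fields just captured pack two powers of two per byte: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each as 2^n bytes. Decoding the values from this trace (a hedged one-off sketch, not SPDK code):

    # SQES/CQES nibble decode for the values above.
    sqes=0x66 cqes=0x44
    printf 'SQE: min %d, max %d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))   # 64, 64
    printf 'CQE: min %d, max %d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))   # 16, 16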
00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:42.919 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:42.920 
09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.920 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
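For nvme2n1 the loop has just recorded nsze=0x100000 and flbas=0x4; the low nibble of flbas selects the in-use LBA format, and lbaf4 on these QEMU controllers is "ms:0 lbads:12" (visible for the nvme1n1 device earlier in this trace), i.e. 2^12 = 4096-byte blocks, so the namespace works out to 0x100000 blocks * 4096 bytes = 4 GiB. A small sketch of that arithmetic, with the trace values hard-coded:

    # Namespace capacity from the id-ns fields captured above.
    nsze=0x100000       # namespace size in logical blocks
    lbads=12            # from lbaf4 "ms:0 lbads:12 rp:0", selected by flbas & 0xf = 4
    bytes=$(( nsze * (1 << lbads) ))
    echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 4294967296 bytes = 4 GiB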
00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 id-ns fields (condensed): nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:42.921 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
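The trace above and the condensed dumps that follow come from SPDK's nvme_get helper in test/nvme/functions.sh: it runs nvme-cli, reads each "field : value" output line with IFS set to ':', strips whitespace from the field name, and evals an assignment into a global associative array named after the device, which is why entries like nvme2n2[nsze]=0x100000 appear in the trace. A minimal sketch of that parsing pattern, assuming the nvme-cli command is passed in explicitly (the real helper assembles the invocation itself from its id-ns/id-ctrl argument):

    # Sketch only: mirrors the pattern visible at functions.sh@16-@23 above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # declare global array, e.g. nvme2n2
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip the banner line with no value
            reg=${reg//[[:space:]]/}       # 'nsze    ' -> 'nsze'
            val=${val# }                   # drop the space after the colon
            eval "${ref}[\$reg]=\$val"     # nvme2n2[nsze]=0x100000
        done < <("$@")
    }

    # Hypothetical usage matching this excerpt:
    # nvme_get nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2

Keys such as lbaf0 survive intact because read -r puts everything after the first ':' into val, so multi-colon lines like 'lbaf 0 : ms:0 lbads:9 rp:0' parse as one field.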
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:42.922 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 id-ns fields (condensed): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:42.923 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
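To decode the in-use format from the values just captured: the low nibble of flbas indexes the lbaf list, and lbads is log2 of the LBA data size, so lbaf4's lbads:12 means 4096-byte blocks (ms:0 means no per-block metadata). With nsze=0x100000 blocks, each of these QEMU namespaces is 4 GiB. A quick check using the captured values:

    flbas=0x4                 # from nvme2n2 above
    lbads=12                  # lbaf4: 'ms:0 lbads:12 rp:0 (in use)'
    nsze=0x100000
    echo "in-use format: lbaf$((flbas & 0xf))"                     # lbaf4
    echo "block size:    $((1 << lbads)) bytes"                    # 4096
    echo "namespace:     $((nsze * (1 << lbads) / 1024**3)) GiB"   # 4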
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:42.924 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 id-ns fields (condensed): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:42.925 09:22:08 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:42.925 09:22:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:42.925 09:22:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:42.925 09:22:08 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:42.925 09:22:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:09:43.185 09:22:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
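The lines above show the outer discovery loop (functions.sh@47-@63): walk /sys/class/nvme/nvme*, resolve the controller's PCI address, filter it through pci_can_use from scripts/common.sh (which returned 0 here because no allow/block list is set, hence the empty '[[ =~ ]]' and '[[ -z ]]' tests), then cache id-ctrl output and record the controller in the ctrls/nvmes/bdfs/ordered_ctrls maps. A rough self-contained sketch of that loop; the PCI_BLOCKED stub is an assumption standing in for the real allow/block-list logic:

    declare -A ctrls bdfs
    declare -a ordered_ctrls

    pci_can_use() {                        # stub of the scripts/common.sh check
        local i
        for i in $PCI_BLOCKED; do [[ $i == "$1" ]] && return 1; done
        return 0
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        ctrls[$ctrl_dev]=$ctrl_dev                        # bookkeeping as at @60-@63
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done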
00:09:43.186 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl fields (condensed): vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
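wctemp and cctemp are reported in kelvins per the NVMe base specification, so the values just captured correspond to the usual QEMU defaults of a 70 °C warning threshold and a 100 °C critical threshold:

    wctemp=343; cctemp=373                  # kelvins, from nvme3 above
    echo "warning:  $((wctemp - 273)) C"    # 70 C
    echo "critical: $((cctemp - 273)) C"    # 100 C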
09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.187 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.188 09:22:08 nvme_scc -- 
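Every assignment in the dump above goes through the same idiom from nvme/functions.sh: a register/value pair is read with IFS=:, then eval writes the value into a named associative array. A minimal sketch of that idiom follows; demo_ctrl is a hypothetical array name, not one the SPDK scripts use.

declare -A demo_ctrl
reg=oacs val=0x12a
# Escaped quotes keep the value a single word when eval re-parses the
# assignment, even when it contains spaces (e.g. mn='QEMU NVMe Ctrl ').
eval "demo_ctrl[$reg]=\"$val\""
echo "${demo_ctrl[oacs]}"   # -> 0x12a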
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:43.188 09:22:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:09:43.188 09:22:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@198-199 -- # the identical ctrl_has_scc check then repeats for nvme0, nvme3 and nvme2; each also reports oncs=0x15d, so all four controllers are echoed
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:43.189 09:22:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:43.189 09:22:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:43.189 09:22:08 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:43.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:43.705 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.705 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.960 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.960 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
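The controller selection above hinges on one bit: ONCS (Optional NVM Command Support) bit 8 advertises the Simple Copy command, and 0x15d has that bit set (0x15d & 0x100 != 0). A minimal sketch of the same test, mirroring the nameref pattern visible in the trace rather than copying functions.sh verbatim:

ctrl_has_scc() {
    local ctrl=$1
    local -n _ctrl=$ctrl           # nameref into the controller's assoc array
    local oncs=${_ctrl[oncs]:-0}
    (( oncs & 1 << 8 ))            # bit 8 of ONCS = Simple Copy support
}
declare -A nvme1=([oncs]=0x15d)
ctrl_has_scc nvme1 && echo "nvme1 supports Simple Copy"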
00:09:43.960 09:22:09 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:43.960 09:22:09 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:43.960 09:22:09 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:43.960 09:22:09 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:43.960 ************************************
00:09:43.960 START TEST nvme_simple_copy
00:09:43.960 ************************************
00:09:43.960 09:22:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:44.217 Initializing NVMe Controllers
00:09:44.217 Attaching to 0000:00:10.0
00:09:44.217 Controller supports SCC. Attached to 0000:00:10.0
00:09:44.217 Namespace ID: 1 size: 6GB
00:09:44.217 Initialization complete.
00:09:44.217 
00:09:44.218 Controller QEMU NVMe Ctrl (12340 )
00:09:44.218 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:44.218 Namespace Block Size:4096
00:09:44.218 Writing LBAs 0 to 63 with Random Data
00:09:44.218 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:44.218 LBAs matching Written Data: 64
00:09:44.218 
00:09:44.218 real 0m0.255s
00:09:44.218 user 0m0.089s
00:09:44.218 sys 0m0.064s
00:09:44.218 09:22:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.218 ************************************
00:09:44.218 END TEST nvme_simple_copy
00:09:44.218 ************************************
00:09:44.218 09:22:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:44.218 
00:09:44.218 real 0m7.303s
00:09:44.218 user 0m0.917s
00:09:44.218 sys 0m1.227s
00:09:44.218 09:22:09 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.218 09:22:09 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:44.218 ************************************
00:09:44.218 END TEST nvme_scc
00:09:44.218 ************************************
00:09:44.218 09:22:09 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:44.218 09:22:09 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:44.218 09:22:09 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:44.218 09:22:09 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:44.218 09:22:09 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:44.218 09:22:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:44.218 09:22:09 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:44.218 09:22:09 -- common/autotest_common.sh@10 -- # set +x
00:09:44.218 ************************************
00:09:44.218 START TEST nvme_fdp
00:09:44.218 ************************************
00:09:44.218 09:22:09 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:44.218 * Looking for test storage...
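The pass criterion above ("LBAs matching Written Data: 64") is the test reading the destination range back and comparing it with the source. The same check can be reproduced from userspace with standard tools; a minimal sketch against a hypothetical /dev/nvme0n1 node, using the 4096-byte block size reported by the test:

bs=4096
# Read the 64 source LBAs (0-63) and the 64 destination LBAs (256-319)
dd if=/dev/nvme0n1 of=src.bin bs=$bs skip=0   count=64 status=none
dd if=/dev/nvme0n1 of=dst.bin bs=$bs skip=256 count=64 status=none
cmp src.bin dst.bin && echo "LBAs matching Written Data: 64"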
00:09:44.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:44.218 09:22:09 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:44.218 09:22:09 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:09:44.218 09:22:09 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:44.218 09:22:09 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:44.218 09:22:09 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:44.476 09:22:09 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:44.476 09:22:09 nvme_fdp -- common/autotest_common.sh@1706 -- # export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:09:44.476 09:22:09 nvme_fdp -- common/autotest_common.sh@1707 -- # export LCOV="lcov $LCOV_OPTS"
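The lt/cmp_versions trace above splits each version string on dots (IFS=.-:) and compares component by component; here lcov 1.15 < 2, so the branch/function coverage options are enabled. A compact standalone sketch of the same idea (not the scripts/common.sh implementation verbatim; assumes purely numeric components):

version_lt() {                        # usage: version_lt 1.15 2
    local IFS=.-: v
    local -a a=($1) b=($2)            # IFS splits "1.15" into (1 15)
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                          # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2.x"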
00:09:44.476 09:22:09 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:44.476 09:22:09 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:44.476 09:22:09 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:44.476 09:22:09 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:44.476 09:22:09 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:44.476 09:22:09 nvme_fdp -- paths/export.sh@2-4 -- # PATH prepended with /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin (copies left by earlier sourcing remain, so these prefixes now appear four times)
00:09:44.476 09:22:09 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:44.476 09:22:09 nvme_fdp -- paths/export.sh@6 -- # echo $PATH
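Because export.sh is re-sourced by each nested script, the same toolchain directories pile up in PATH four times over. That is harmless, just noisy; a small illustrative dedupe (not part of SPDK's scripts) that keeps the first occurrence of each entry:

dedupe_path() {
    # Split PATH on ':' and keep the first occurrence of each directory
    local -A seen=()
    local -a parts=() out=()
    local dir
    IFS=: read -ra parts <<< "$PATH"
    for dir in "${parts[@]}"; do
        [[ -z $dir || -n ${seen[$dir]} ]] && continue
        seen[$dir]=1
        out+=("$dir")
    done
    local IFS=:
    PATH="${out[*]}"                  # re-join with ':'
}
dedupe_path && echo "$PATH"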
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:09:44.476 09:22:09 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:09:44.476 09:22:09 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:44.476 09:22:09 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:44.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:44.733 Waiting for block devices as requested
00:09:44.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:44.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:44.990 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:44.990 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:50.311 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:50.311 09:22:15 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:50.311 09:22:15 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:50.311 09:22:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:50.311 09:22:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:50.311 09:22:15 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:50.311 09:22:15 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl registers parsed into nvme0[] (each IFS=: read pair assigned via eval): vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0
nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.314 
09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:50.314 09:22:15 nvme_fdp -- 
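The xtrace above is nvme/functions.sh's nvme_get helper at work: it runs nvme-cli's id-ctrl against the device and splits each human-readable "field : value" line on ':' into a global associative array (nvme0[sqes]=0x66, and so on), which is the IFS=:/read/eval cycle repeated per field in the trace. A minimal standalone sketch of that idiom, not SPDK's exact function; it assumes plain-text nvme-cli output, and the name nvme_get_sketch is hypothetical:

    # Parse nvme-cli's "field : value" output into a global associative
    # array named by $1 (mirrors the IFS=:/read/eval cycle in the trace).
    nvme_get_sketch() {
        local ref=$1 source=$2 dev=$3 reg val
        declare -gA "$ref=()"                    # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines with no value
            reg=${reg//[[:space:]]/}             # drop padding in the key
            val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks only
            eval "${ref}[\$reg]=\$val"           # e.g. nvme0[sqes]=0x66
        done < <(nvme "$source" "$dev")
    }
    # Usage: nvme_get_sketch nvme0 id-ctrl /dev/nvme0; echo "${nvme0[sqes]}"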
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:50.314 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:50.315 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
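At this point nvme0 is fully registered: its namespace map nvme0_ns, the ctrls/nvmes/bdfs bookkeeping maps, and ordered_ctrls are filled in before the loop moves to the next controller. A sketch of that discovery-and-registration pass, using the same map names as the trace; pci_can_use is stubbed here (the real filter lives in scripts/common.sh), and the sysfs-to-BDF lookup is an illustrative assumption:

    # Walk sysfs controllers, keep usable PCI devices, record bookkeeping.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { true; }                          # stand-in for the real check
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                         # e.g. nvme0
        for ns in "$ctrl/${ctrl##*/}n"*; do          # e.g. .../nvme0/nvme0n1
            [[ -e $ns ]] || continue
            : # per-namespace id-ns parsing happens here (see sketch above)
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # name of the ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done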
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:50.316 09:22:15 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:50.316 09:22:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:50.316 09:22:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:50.316 09:22:15 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:50.316 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:50.317 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:50.318 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
nvme1[ofcs]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:50.319 09:22:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.319 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:50.320 09:22:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:50.320 09:22:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:50.320 09:22:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.320 09:22:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:50.320 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:50.320 
09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:50.321 09:22:15 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.321 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
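(editor's note) The id-ns fields captured above for nvme1n1 (flbas=0x7, lbaf7='ms:64 lbads:12 rp:0 (in use)') are enough to derive the active block size: the low nibble of flbas indexes the LBA format list, and lbads is log2 of the data size. A minimal sketch, assuming bash 4+ and reusing values taken from this trace; the snippet is illustrative and is not part of functions.sh:

  # values copied from the nvme1n1 id-ns parse earlier in this trace
  declare -A nvme1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
  fmt=$(( nvme1n1[flbas] & 0xf ))     # low nibble of flbas selects the format -> 7
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme1n1[lbaf$fmt]}")
  # lbads:12 -> 2^12 = 4096-byte data blocks; ms:64 means 64 metadata bytes each
  echo "active LBA format $fmt: $((1 << lbads))-byte blocks"

This agrees with the "(in use)" marker nvme-cli prints on the lbaf7 line above.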
00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.322 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.323 09:22:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
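(editor's note) Every eval in this trace comes from the same nvme_get pattern: nvme-cli's id-ctrl/id-ns output is one "name : value" pair per line, read with IFS=: into reg and val, guarded by the [[ -n $val ]] test visible throughout, and stored into a per-device associative array. A hedged reconstruction of that loop; parse_nvme_id below is a hypothetical stand-in for functions.sh's nvme_get, which additionally handles the shift/nameref plumbing seen at functions.sh@17-23:

  parse_nvme_id() {                   # usage: parse_nvme_id <array-name> <id-ctrl|id-ns> <dev>
      local -n _out=$1
      local reg val
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue   # skips headers; shown in the xtrace as [[ -n '' ]]
          reg=${reg//[[:space:]]/}    # 'vid       ' -> 'vid'
          _out[$reg]=${val# }         # trailing padding kept, as in sn='12342   '
      done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
  }
  declare -A nvme2
  parse_nvme_id nvme2 id-ctrl /dev/nvme2
  echo "${nvme2[mn]} / ${nvme2[subnqn]}"

Values containing colons (the ps0 power-state string, for instance) survive because val, as the last variable given to read, receives the whole remainder of the line.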
00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.323 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:50.324 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
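What this trace is doing: nvme/functions.sh walks the raw output of nvme id-ctrl / nvme id-ns one line at a time, splits each "field : value" pair on the colon via IFS, and evals the pair into a global associative array named after the device (nvme2, nvme2n1, ...). A minimal sketch of that pattern, assuming nvme-cli is installed and the device node exists (names and trimming details here are illustrative, not the exact functions.sh implementation):

#!/usr/bin/env bash
# Hedged sketch of the nvme_get pattern visible in the trace above.
nvme_get() {
  local ref=$1 reg val                    # ref: target array name, e.g. nvme2n1
  shift
  local -gA "$ref=()"                     # declare the array globally (functions.sh@20)
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # field names are space-padded, e.g. "lbaf  4"
    val=${val#"${val%%[![:space:]]*}"}    # left-trim the value
    [[ -n $val ]] || continue             # skips the "NVME Identify ..." header line,
                                          # which the trace shows as [[ -n '' ]]
    eval "${ref}[${reg}]=\"${val}\""      # e.g. nvme2n1[nsze]="0x100000"
  done < <("$@")                          # the id command, e.g. nvme id-ns /dev/nvme2n1
}

nvme_get nvme2n1 nvme id-ns /dev/nvme2n1
echo "${nvme2n1[nsze]}"                   # -> 0x100000 for the QEMU namespaces in this run

The eval mirrors the functions.sh@23 entries above; it is acceptable here because the input is trusted nvme-cli output, not arbitrary text.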
00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
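The geometry fields just parsed (nsze, ncap, nuse, all 0x100000 LBAs) can be cross-checked against nvme-cli directly; a quick hedged spot-check, reusing the array populated by the sketch above:

# Verify one parsed field against nvme-cli's own output (assumes the
# nvme2n1 array from the earlier sketch is in scope).
nvme id-ns /dev/nvme2n1 | grep -i '^nsze'                # nsze : 0x100000
printf '%s = %d LBAs\n' "${nvme2n1[nsze]}" "$(( nvme2n1[nsze] ))"   # 1048576 LBAs

With the in-use 4096-byte format reported further down (lbaf4, lbads:12), 0x100000 LBAs works out to 4 GiB per namespace.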
00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.324 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:50.325 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:50.326 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.326 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
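Each namespace in this run reports eight LBA format descriptors (lbaf0..lbaf7); flbas selects the active one, and lbads is the log2 of the data block size, so the "ms:0 lbads:12 rp:0 (in use)" descriptor means 4096-byte blocks with no metadata. A hedged decode using the nvme2n1 array parsed earlier (bit layout per the NVMe Identify Namespace structure; only flbas bits 0-3 are needed for these devices):

fmt=$(( nvme2n1[flbas] & 0xf ))      # flbas 0x4 -> LBA format index 4
desc=${nvme2n1[lbaf$fmt]}            # 'ms:0 lbads:12 rp:0 (in use)'
lbads=$(grep -o 'lbads:[0-9]*' <<< "$desc" | cut -d: -f2)
echo "lbaf$fmt: $(( 1 << lbads ))-byte data blocks"   # 4096-byte data blocks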
00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.327 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:50.328 
09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:50.328 09:22:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:50.328 09:22:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:50.328 09:22:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.328 09:22:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:50.328 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:50.329 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 
09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:50.329 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 
09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:50.330 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.331 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.332 09:22:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
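(A note on the pattern traced above: the enumeration loop in functions.sh@47-65 runs nvme-cli's id-ctrl against each usable /sys/class/nvme/nvmeX device, splits every "reg : val" output line on the first ':' and stores the pairs in a per-controller bash associative array, so later code can simply index ${nvme3[ctratt]}. Among the values captured for nvme3, wctemp=343 and cctemp=373 are Kelvin thresholds, roughly 70 C warning and 100 C critical. A condensed sketch of that loop follows; it mirrors the trace but is not the verbatim SPDK source:

nvme_get() {                                  # e.g. nvme_get nvme3 id-ctrl /dev/nvme3
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                       # declare the global associative array, as at functions.sh@20
    while IFS=: read -r reg val; do           # "vid : 0x1b36" -> reg="vid ", val=" 0x1b36"
        reg=${reg//[[:space:]]/}              # trim the padded register name
        val=${val# }                          # drop the single space after ':'
        [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

After this has run for each device, ${nvme3[ctratt]} yields 0x88010, which drives the FDP controller selection below.)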
00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:50.332 09:22:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:50.332 09:22:15 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:50.332 09:22:15 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.161 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.162 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.162 09:22:16 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:51.162 09:22:16 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.162 09:22:16 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.162 09:22:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:51.162 ************************************
00:09:51.162 START TEST nvme_flexible_data_placement
00:09:51.162 ************************************
00:09:51.162 09:22:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:51.420 Initializing NVMe Controllers
00:09:51.420 Attaching to 0000:00:13.0
00:09:51.420 Controller supports FDP Attached to 0000:00:13.0
00:09:51.420 Namespace ID: 1 Endurance Group ID: 1
00:09:51.420 Initialization complete.
00:09:51.420
00:09:51.420 ==================================
00:09:51.420 == FDP tests for Namespace: #01 ==
00:09:51.420 ==================================
00:09:51.420
00:09:51.420 Get Feature: FDP:
00:09:51.420 =================
00:09:51.420 Enabled: Yes
00:09:51.420 FDP configuration Index: 0
00:09:51.420
00:09:51.420 FDP configurations log page
00:09:51.420 ===========================
00:09:51.420 Number of FDP configurations: 1
00:09:51.420 Version: 0
00:09:51.420 Size: 112
00:09:51.420 FDP Configuration Descriptor: 0
00:09:51.420 Descriptor Size: 96
00:09:51.420 Reclaim Group Identifier format: 2
00:09:51.420 FDP Volatile Write Cache: Not Present
00:09:51.420 FDP Configuration: Valid
00:09:51.420 Vendor Specific Size: 0
00:09:51.420 Number of Reclaim Groups: 2
00:09:51.420 Number of Reclaim Unit Handles: 8
00:09:51.420 Max Placement Identifiers: 128
00:09:51.420 Number of Namespaces Supported: 256
00:09:51.420 Reclaim unit Nominal Size: 6000000 bytes
00:09:51.420 Estimated Reclaim Unit Time Limit: Not Reported
00:09:51.420 RUH Desc #000: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #001: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #002: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #003: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #004: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #005: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #006: RUH Type: Initially Isolated
00:09:51.420 RUH Desc #007: RUH Type: Initially Isolated
00:09:51.420
00:09:51.420 FDP reclaim unit handle usage log page
00:09:51.420 ======================================
00:09:51.420 Number of Reclaim Unit Handles: 8
00:09:51.420 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:51.420 RUH Usage Desc #001: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #002: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #003: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #004: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #005: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #006: RUH Attributes: Unused
00:09:51.420 RUH Usage Desc #007: RUH Attributes: Unused
00:09:51.420
00:09:51.420 FDP statistics log page
00:09:51.420 =======================
00:09:51.420 Host bytes with metadata written: 816668672
00:09:51.420 Media bytes with metadata written: 816934912
00:09:51.420 Media bytes erased: 0
00:09:51.420
00:09:51.420 FDP Reclaim unit handle status
00:09:51.420 ==============================
00:09:51.420 Number of RUHS descriptors: 2
00:09:51.420 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000552a
00:09:51.420 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:51.420
00:09:51.420 FDP write on placement id: 0 success
00:09:51.420
00:09:51.420 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:51.420
00:09:51.420 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:51.420
00:09:51.420 Get Feature: FDP Events for Placement handle: #0
00:09:51.420 ========================
00:09:51.420 Number of FDP Events: 6
00:09:51.420 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:51.420 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:51.420 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:51.420 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:51.420 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:51.420 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:51.420
00:09:51.420 FDP events log page
00:09:51.420 ===================
00:09:51.420 Number of FDP events: 1
00:09:51.420 FDP Event #0:
00:09:51.420 Event Type: RU Not Written to Capacity
00:09:51.420 Placement Identifier: Valid
00:09:51.420 NSID: Valid
00:09:51.420 Location: Valid
00:09:51.420 Placement Identifier: 0
00:09:51.420 Event Timestamp: 8
00:09:51.420 Namespace Identifier: 1
00:09:51.420 Reclaim Group Identifier: 0
00:09:51.420 Reclaim Unit Handle Identifier: 0
00:09:51.420
00:09:51.420 FDP test passed
00:09:51.420
00:09:51.420 real 0m0.247s
00:09:51.420 user 0m0.077s
00:09:51.420 sys 0m0.066s
00:09:51.420 09:22:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.420 09:22:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:51.420 ************************************
00:09:51.420 END TEST nvme_flexible_data_placement
00:09:51.420 ************************************
00:09:51.420
00:09:51.420 real 0m7.224s
00:09:51.420 user 0m0.942s
00:09:51.420 sys 0m1.235s
00:09:51.420 09:22:16 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.420 09:22:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:51.420 ************************************
00:09:51.420 END TEST nvme_fdp
00:09:51.420 ************************************
00:09:51.420 09:22:16 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:51.420 09:22:16 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:51.420 09:22:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:51.420 09:22:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.420 09:22:16 -- common/autotest_common.sh@10 -- # set +x
00:09:51.420 ************************************
00:09:51.420 START TEST nvme_rpc
00:09:51.420 ************************************
00:09:51.420 09:22:16 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:51.678 * Looking for test storage...
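(Looking back at the scan that opened this FDP section: nvme3 was selected because its ctratt value 0x88010 has bit 19 set (1 << 19 = 0x80000, the Flexible Data Placement attribute in CTRATT), while nvme0, nvme1 and nvme2 report 0x8000 and fail the bit test at functions.sh@180. A minimal sketch of that check, assuming the per-controller associative arrays built during enumeration:

ctrl_has_fdp() {
    local -n _ctrl=$1                   # bash nameref onto e.g. the nvme3 array, as at functions.sh@73
    local ctratt=${_ctrl[ctratt]:-0}
    (( ctratt & 1 << 19 ))              # CTRATT bit 19: FDP supported
}

ctrl_has_fdp nvme3 && echo nvme3        # 0x88010 & 0x80000 != 0 -> prints nvme3
ctrl_has_fdp nvme2 || echo no-fdp       # 0x8000 has only bit 15 set -> fails

In the report above, the RUHS descriptors also show the remaining write budget per reclaim unit: RUAMW, the reclaim unit available media writes counted in logical blocks, is 0x552a = 21802 blocks for handle #0000.)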
00:09:51.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.678 09:22:16 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.678 09:22:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.678 09:22:16 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.678 09:22:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.678 09:22:16 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.678 09:22:17 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:51.678 09:22:17 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.678 09:22:17 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.678 --rc genhtml_branch_coverage=1 00:09:51.678 --rc genhtml_function_coverage=1 00:09:51.678 --rc genhtml_legend=1 00:09:51.678 --rc geninfo_all_blocks=1 00:09:51.678 --rc geninfo_unexecuted_blocks=1 00:09:51.678 00:09:51.678 ' 00:09:51.678 09:22:17 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.678 --rc genhtml_branch_coverage=1 00:09:51.678 --rc genhtml_function_coverage=1 00:09:51.678 --rc genhtml_legend=1 00:09:51.678 --rc geninfo_all_blocks=1 00:09:51.678 --rc geninfo_unexecuted_blocks=1 00:09:51.678 00:09:51.678 ' 00:09:51.678 09:22:17 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.678 --rc genhtml_branch_coverage=1 00:09:51.678 --rc genhtml_function_coverage=1 00:09:51.678 --rc genhtml_legend=1 00:09:51.678 --rc geninfo_all_blocks=1 00:09:51.678 --rc geninfo_unexecuted_blocks=1 00:09:51.678 00:09:51.678 ' 00:09:51.678 09:22:17 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.678 --rc genhtml_branch_coverage=1 00:09:51.678 --rc genhtml_function_coverage=1 00:09:51.678 --rc genhtml_legend=1 00:09:51.678 --rc geninfo_all_blocks=1 00:09:51.679 --rc geninfo_unexecuted_blocks=1 00:09:51.679 00:09:51.679 ' 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65802 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:51.679 09:22:17 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65802 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65802 ']' 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.679 09:22:17 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.937 [2024-11-20 09:22:17.134249] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
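(The BDF discovery traced above, autotest_common.sh@1498-1512, reduces to the sketch below: gen_nvme.sh emits an SPDK JSON config entry for every local NVMe controller, jq extracts the PCI transport addresses, and the first one becomes the RPC test target. This is a condensed illustration, not the verbatim helper; $rootdir is the SPDK repo root, as in the trace:

get_first_nvme_bdf() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1    # bail out if no controllers were found
    echo "${bdfs[0]}"                    # on this VM: 0000:00:10.0
}

With bdf=0000:00:10.0 in hand, the test attaches the device as Nvme0 and then deliberately calls bdev_nvme_apply_firmware with a missing file, expecting the -32603 "open file failed." JSON-RPC error shown below.)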
00:09:51.937 [2024-11-20 09:22:17.134373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65802 ] 00:09:51.937 [2024-11-20 09:22:17.290728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.195 [2024-11-20 09:22:17.395187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.195 [2024-11-20 09:22:17.395486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.762 09:22:18 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.762 09:22:18 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:52.762 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:53.020 Nvme0n1 00:09:53.020 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:53.020 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:53.279 request: 00:09:53.279 { 00:09:53.279 "bdev_name": "Nvme0n1", 00:09:53.279 "filename": "non_existing_file", 00:09:53.279 "method": "bdev_nvme_apply_firmware", 00:09:53.279 "req_id": 1 00:09:53.279 } 00:09:53.279 Got JSON-RPC error response 00:09:53.279 response: 00:09:53.279 { 00:09:53.279 "code": -32603, 00:09:53.279 "message": "open file failed." 00:09:53.279 } 00:09:53.279 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:53.279 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:53.279 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:53.537 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:53.537 09:22:18 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65802 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65802 ']' 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65802 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65802 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.537 killing process with pid 65802 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65802' 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65802 00:09:53.537 09:22:18 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65802 00:09:54.909 00:09:54.909 real 0m3.423s 00:09:54.909 user 0m6.652s 00:09:54.909 sys 0m0.510s 00:09:54.909 09:22:20 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.909 09:22:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.909 ************************************ 00:09:54.909 END TEST nvme_rpc 00:09:54.909 ************************************ 00:09:54.909 09:22:20 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:54.909 09:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:54.909 09:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.909 09:22:20 -- common/autotest_common.sh@10 -- # set +x 00:09:54.909 ************************************ 00:09:54.909 START TEST nvme_rpc_timeouts 00:09:54.909 ************************************ 00:09:54.909 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:54.909 * Looking for test storage... 00:09:55.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.167 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.167 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.167 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.167 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.167 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.168 09:22:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.168 --rc genhtml_branch_coverage=1 00:09:55.168 --rc genhtml_function_coverage=1 00:09:55.168 --rc genhtml_legend=1 00:09:55.168 --rc geninfo_all_blocks=1 00:09:55.168 --rc geninfo_unexecuted_blocks=1 00:09:55.168 00:09:55.168 ' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.168 --rc genhtml_branch_coverage=1 00:09:55.168 --rc genhtml_function_coverage=1 00:09:55.168 --rc genhtml_legend=1 00:09:55.168 --rc geninfo_all_blocks=1 00:09:55.168 --rc geninfo_unexecuted_blocks=1 00:09:55.168 00:09:55.168 ' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.168 --rc genhtml_branch_coverage=1 00:09:55.168 --rc genhtml_function_coverage=1 00:09:55.168 --rc genhtml_legend=1 00:09:55.168 --rc geninfo_all_blocks=1 00:09:55.168 --rc geninfo_unexecuted_blocks=1 00:09:55.168 00:09:55.168 ' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.168 --rc genhtml_branch_coverage=1 00:09:55.168 --rc genhtml_function_coverage=1 00:09:55.168 --rc genhtml_legend=1 00:09:55.168 --rc geninfo_all_blocks=1 00:09:55.168 --rc geninfo_unexecuted_blocks=1 00:09:55.168 00:09:55.168 ' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65874 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65874 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65906 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:55.168 09:22:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65906 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65906 ']' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.168 09:22:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 [2024-11-20 09:22:20.570346] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:55.168 [2024-11-20 09:22:20.570531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65906 ] 00:09:55.427 [2024-11-20 09:22:20.743122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.427 [2024-11-20 09:22:20.833135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.427 [2024-11-20 09:22:20.833141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.993 09:22:21 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.993 09:22:21 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:55.993 Checking default timeout settings: 00:09:55.993 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:55.993 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:56.590 Making settings changes with rpc: 00:09:56.591 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:56.591 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:56.591 Check default vs. modified settings: 00:09:56.591 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:56.591 09:22:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:56.848 Setting action_on_timeout is changed as expected. 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:56.848 Setting timeout_us is changed as expected. 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:56.848 Setting timeout_admin_us is changed as expected. 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65874 /tmp/settings_modified_65874 00:09:56.848 09:22:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65906 00:09:56.848 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65906 ']' 00:09:56.848 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65906 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65906 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65906' 00:09:56.849 killing process with pid 65906 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65906 00:09:56.849 09:22:22 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65906 00:09:58.217 RPC TIMEOUT SETTING TEST PASSED. 00:09:58.217 09:22:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
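The nvme_rpc_timeouts pass above reduces to a save/modify/save/diff pattern against the running spdk_tgt. A minimal sketch reconstructed from the xtrace follows; xtrace does not print redirections, so the writes into the two tmpfiles are assumed here, and the authoritative logic lives in test/nvme/nvme_rpc_timeouts.sh:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Snapshot the default bdev_nvme settings (redirection target assumed).
    $rpc_py save_config > /tmp/settings_default_65874
    # Flip all three timeout knobs over RPC, then snapshot again.
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_65874
    # Every watched setting must differ between the two snapshots.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_65874 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_65874 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1
        echo "Setting $setting is changed as expected."
    done

This matches the three comparisons traced above: action_on_timeout none -> abort, timeout_us 0 -> 12000000, timeout_admin_us 0 -> 24000000.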
00:09:58.217 00:09:58.217 real 0m3.301s 00:09:58.217 user 0m6.357s 00:09:58.217 sys 0m0.478s 00:09:58.217 09:22:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.217 09:22:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:58.217 ************************************ 00:09:58.217 END TEST nvme_rpc_timeouts 00:09:58.217 ************************************ 00:09:58.217 09:22:23 -- spdk/autotest.sh@239 -- # uname -s 00:09:58.217 09:22:23 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:58.217 09:22:23 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:58.217 09:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.217 09:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.217 09:22:23 -- common/autotest_common.sh@10 -- # set +x 00:09:58.217 ************************************ 00:09:58.217 START TEST sw_hotplug 00:09:58.217 ************************************ 00:09:58.217 09:22:23 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:58.475 * Looking for test storage... 00:09:58.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.475 09:22:23 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.475 --rc genhtml_branch_coverage=1 00:09:58.475 --rc genhtml_function_coverage=1 00:09:58.475 --rc genhtml_legend=1 00:09:58.475 --rc geninfo_all_blocks=1 00:09:58.475 --rc geninfo_unexecuted_blocks=1 00:09:58.475 00:09:58.475 ' 00:09:58.475 09:22:23 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.476 --rc genhtml_branch_coverage=1 00:09:58.476 --rc genhtml_function_coverage=1 00:09:58.476 --rc genhtml_legend=1 00:09:58.476 --rc geninfo_all_blocks=1 00:09:58.476 --rc geninfo_unexecuted_blocks=1 00:09:58.476 00:09:58.476 ' 00:09:58.476 09:22:23 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.476 --rc genhtml_branch_coverage=1 00:09:58.476 --rc genhtml_function_coverage=1 00:09:58.476 --rc genhtml_legend=1 00:09:58.476 --rc geninfo_all_blocks=1 00:09:58.476 --rc geninfo_unexecuted_blocks=1 00:09:58.476 00:09:58.476 ' 00:09:58.476 09:22:23 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.476 --rc genhtml_branch_coverage=1 00:09:58.476 --rc genhtml_function_coverage=1 00:09:58.476 --rc genhtml_legend=1 00:09:58.476 --rc geninfo_all_blocks=1 00:09:58.476 --rc geninfo_unexecuted_blocks=1 00:09:58.476 00:09:58.476 ' 00:09:58.476 09:22:23 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:58.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.733 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:58.733 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:58.733 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:58.733 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:58.991 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:58.991 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:58.991 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:58.991 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:58.991 09:22:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:58.992 09:22:24 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:58.992 09:22:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:58.992 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:58.992 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:58.992 09:22:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.251 Waiting for block devices as requested 00:09:59.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.509 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.509 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.509 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.810 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:04.810 09:22:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:04.810 09:22:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.068 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:05.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.068 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:05.326 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:05.584 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.584 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66756 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:05.584 09:22:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:05.584 09:22:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:05.841 Initializing NVMe Controllers 00:10:05.841 Attaching to 0000:00:10.0 00:10:05.841 Attaching to 0000:00:11.0 00:10:05.841 Attached to 0000:00:10.0 00:10:05.841 Attached to 0000:00:11.0 00:10:05.841 Initialization complete. Starting I/O... 
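Before the hotplug app launched, nvme_in_userspace (traced at sw_hotplug.sh@133 above) discovered the controllers purely by PCI class code: class 01h (mass storage), subclass 08h (NVM), progif 02h (NVMe). The enumeration pipeline appears verbatim in the xtrace; condensed:

    # lspci -mm -n -D quotes the class field, so match "0108" in column 2,
    # keep lines flagged -p02 (progif 02), and strip quotes from the BDFs.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    printf '%s\n' "${nvmes[@]}"   # 0000:00:10.0 .. 0000:00:13.0 on this VM

The real helper additionally filters by driver binding and OS (the /sys/bus/pci/drivers/nvme existence tests and uname -s probes in the trace), and sw_hotplug.sh then keeps only the first nvme_count=2 entries, which is why PCI_ALLOWED was restricted to 0000:00:10.0 and 0000:00:11.0 before the second setup.sh run.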
00:10:05.841 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:05.841 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:05.841 00:10:06.775 QEMU NVMe Ctrl (12340 ): 2557 I/Os completed (+2557) 00:10:06.775 QEMU NVMe Ctrl (12341 ): 2623 I/Os completed (+2623) 00:10:06.775 00:10:08.148 QEMU NVMe Ctrl (12340 ): 6101 I/Os completed (+3544) 00:10:08.148 QEMU NVMe Ctrl (12341 ): 6297 I/Os completed (+3674) 00:10:08.148 00:10:09.082 QEMU NVMe Ctrl (12340 ): 9594 I/Os completed (+3493) 00:10:09.082 QEMU NVMe Ctrl (12341 ): 9886 I/Os completed (+3589) 00:10:09.082 00:10:10.015 QEMU NVMe Ctrl (12340 ): 13208 I/Os completed (+3614) 00:10:10.015 QEMU NVMe Ctrl (12341 ): 13490 I/Os completed (+3604) 00:10:10.015 00:10:10.953 QEMU NVMe Ctrl (12340 ): 16762 I/Os completed (+3554) 00:10:10.953 QEMU NVMe Ctrl (12341 ): 17051 I/Os completed (+3561) 00:10:10.953 00:10:11.888 09:22:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.888 09:22:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.888 09:22:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.888 [2024-11-20 09:22:36.980596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.888 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.888 [2024-11-20 09:22:36.982618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.982690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.982716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.982743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.888 [2024-11-20 09:22:36.985739] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.985805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.985826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:36.985848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.888 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.888 [2024-11-20 09:22:37.007896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:11.888 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.888 [2024-11-20 09:22:37.009708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:37.009763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:37.009795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.888 [2024-11-20 09:22:37.009821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.889 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.889 [2024-11-20 09:22:37.012518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.889 [2024-11-20 09:22:37.012574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.889 [2024-11-20 09:22:37.012600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.889 [2024-11-20 09:22:37.012621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.889 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.889 EAL: Scan for (pci) bus failed. 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.889 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.889 Attaching to 0000:00:10.0 00:10:11.889 Attached to 0000:00:10.0 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.889 09:22:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:11.889 Attaching to 0000:00:11.0 00:10:11.889 Attached to 0000:00:11.0 00:10:12.830 QEMU NVMe Ctrl (12340 ): 2910 I/Os completed (+2910) 00:10:12.830 QEMU NVMe Ctrl (12341 ): 2709 I/Os completed (+2709) 00:10:12.830 00:10:13.763 QEMU NVMe Ctrl (12340 ): 6092 I/Os completed (+3182) 00:10:13.763 QEMU NVMe Ctrl (12341 ): 5970 I/Os completed (+3261) 00:10:13.763 00:10:15.135 QEMU NVMe Ctrl (12340 ): 9177 I/Os completed (+3085) 00:10:15.135 QEMU NVMe Ctrl (12341 ): 9132 I/Os completed (+3162) 00:10:15.135 00:10:16.070 QEMU NVMe Ctrl (12340 ): 12601 I/Os completed (+3424) 00:10:16.070 QEMU NVMe Ctrl (12341 ): 12735 I/Os completed (+3603) 00:10:16.070 00:10:17.004 QEMU NVMe Ctrl (12340 ): 15930 I/Os completed (+3329) 00:10:17.004 QEMU NVMe Ctrl (12341 ): 16394 I/Os completed (+3659) 00:10:17.004 00:10:17.937 QEMU NVMe Ctrl (12340 ): 18986 I/Os completed (+3056) 00:10:17.937 QEMU NVMe Ctrl (12341 ): 19669 I/Os completed (+3275) 00:10:17.937 00:10:18.891 QEMU NVMe Ctrl (12340 ): 22231 I/Os completed (+3245) 00:10:18.891 QEMU NVMe Ctrl (12341 ): 23002 I/Os completed (+3333) 
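Each "Controller removed ... Attaching ... Attached" cycle above is driven from the shell, not from the app: the test yanks each device out via sysfs, rescans the bus, and rebinds the device to the userspace driver. xtrace hides redirections, so only the echoed values show in the log; the sketch below uses the standard Linux sysfs hotplug paths, and every redirection target is an assumption except /sys/bus/pci/rescan, which is spelled out in the cleanup trap near the end of this log:

    for bdf in 0000:00:10.0 0000:00:11.0; do
        # Surgical removal; the bare '# echo 1' at sw_hotplug.sh@40.
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # path assumed
    done
    echo 1 > /sys/bus/pci/rescan                        # '# echo 1' at @56; path confirmed by the trap
    for bdf in 0000:00:10.0 0000:00:11.0; do
        # Rebind to the userspace driver; matches the echo uio_pci_generic /
        # echo <bdf> / echo '' sequence at @59-@62 (targets assumed; the trace
        # echoes the BDF twice at @60/@61, a single drivers_probe write is
        # shown here as the minimal rebind).
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done

The test then paces itself with sleep 12 (sw_hotplug.sh@66, hotplug_wait=6 doubled across remove and attach) so the app's AER handling can settle before the next cycle, which is why the I/O counters below restart from a fresh attach.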
00:10:18.891 00:10:19.824 QEMU NVMe Ctrl (12340 ): 25222 I/Os completed (+2991) 00:10:19.824 QEMU NVMe Ctrl (12341 ): 26325 I/Os completed (+3323) 00:10:19.824 00:10:20.756 QEMU NVMe Ctrl (12340 ): 28283 I/Os completed (+3061) 00:10:20.756 QEMU NVMe Ctrl (12341 ): 29463 I/Os completed (+3138) 00:10:20.756 00:10:22.129 QEMU NVMe Ctrl (12340 ): 31672 I/Os completed (+3389) 00:10:22.129 QEMU NVMe Ctrl (12341 ): 32837 I/Os completed (+3374) 00:10:22.129 00:10:23.061 QEMU NVMe Ctrl (12340 ): 34822 I/Os completed (+3150) 00:10:23.061 QEMU NVMe Ctrl (12341 ): 35990 I/Os completed (+3153) 00:10:23.061 00:10:23.994 QEMU NVMe Ctrl (12340 ): 37887 I/Os completed (+3065) 00:10:23.994 QEMU NVMe Ctrl (12341 ): 39106 I/Os completed (+3116) 00:10:23.994 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.994 [2024-11-20 09:22:49.311603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:23.994 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:23.994 [2024-11-20 09:22:49.313630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.313701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.313731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.313758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.994 [2024-11-20 09:22:49.316898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.316963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.316986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.317010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.994 [2024-11-20 09:22:49.340780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:23.994 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:23.994 [2024-11-20 09:22:49.343001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.343089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.343132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.343163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:23.994 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.994 [2024-11-20 09:22:49.346699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.346746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.346762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 [2024-11-20 09:22:49.346777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.994 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.251 Attaching to 0000:00:10.0 00:10:24.251 Attached to 0000:00:10.0 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.251 09:22:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.251 Attaching to 0000:00:11.0 00:10:24.251 Attached to 0000:00:11.0 00:10:24.817 QEMU NVMe Ctrl (12340 ): 2185 I/Os completed (+2185) 00:10:24.817 QEMU NVMe Ctrl (12341 ): 1898 I/Os completed (+1898) 00:10:24.817 00:10:25.750 QEMU NVMe Ctrl (12340 ): 5269 I/Os completed (+3084) 00:10:25.750 QEMU NVMe Ctrl (12341 ): 5018 I/Os completed (+3120) 00:10:25.750 00:10:27.123 QEMU NVMe Ctrl (12340 ): 8532 I/Os completed (+3263) 00:10:27.123 QEMU NVMe Ctrl (12341 ): 8227 I/Os completed (+3209) 00:10:27.123 00:10:28.055 QEMU NVMe Ctrl (12340 ): 11682 I/Os completed (+3150) 00:10:28.055 QEMU NVMe Ctrl (12341 ): 11359 I/Os completed (+3132) 00:10:28.055 00:10:28.728 QEMU NVMe Ctrl (12340 ): 14704 I/Os completed (+3022) 00:10:28.728 QEMU NVMe Ctrl (12341 ): 14352 I/Os completed (+2993) 00:10:28.728 00:10:30.102 QEMU NVMe Ctrl (12340 ): 17830 I/Os completed (+3126) 00:10:30.102 QEMU NVMe Ctrl (12341 ): 17585 I/Os completed (+3233) 00:10:30.102 00:10:31.037 QEMU NVMe Ctrl (12340 ): 21007 I/Os completed (+3177) 00:10:31.037 QEMU NVMe Ctrl (12341 ): 20688 I/Os completed (+3103) 00:10:31.037 00:10:31.970 QEMU NVMe Ctrl (12340 ): 24511 I/Os completed (+3504) 00:10:31.970 QEMU NVMe Ctrl (12341 ): 24264 I/Os completed (+3576) 00:10:31.970 
00:10:32.904 QEMU NVMe Ctrl (12340 ): 28032 I/Os completed (+3521) 00:10:32.904 QEMU NVMe Ctrl (12341 ): 27813 I/Os completed (+3549) 00:10:32.904 00:10:33.904 QEMU NVMe Ctrl (12340 ): 31053 I/Os completed (+3021) 00:10:33.904 QEMU NVMe Ctrl (12341 ): 30847 I/Os completed (+3034) 00:10:33.904 00:10:34.839 QEMU NVMe Ctrl (12340 ): 34427 I/Os completed (+3374) 00:10:34.839 QEMU NVMe Ctrl (12341 ): 34256 I/Os completed (+3409) 00:10:34.839 00:10:35.773 QEMU NVMe Ctrl (12340 ): 37739 I/Os completed (+3312) 00:10:35.773 QEMU NVMe Ctrl (12341 ): 37585 I/Os completed (+3329) 00:10:35.773 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.339 [2024-11-20 09:23:01.577961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:36.339 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.339 [2024-11-20 09:23:01.579557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.579616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.579640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.579661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.339 [2024-11-20 09:23:01.582097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.582149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.582167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.582188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.339 [2024-11-20 09:23:01.602016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:36.339 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.339 [2024-11-20 09:23:01.603460] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.603513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.603536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.603556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.339 [2024-11-20 09:23:01.605704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.605751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.605777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 [2024-11-20 09:23:01.605794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.339 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:36.339 EAL: Scan for (pci) bus failed. 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.339 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.339 Attaching to 0000:00:10.0 00:10:36.339 Attached to 0000:00:10.0 00:10:36.597 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.597 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.597 09:23:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.597 Attaching to 0000:00:11.0 00:10:36.597 Attached to 0000:00:11.0 00:10:36.597 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.597 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.597 [2024-11-20 09:23:01.856358] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:48.803 09:23:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.803 09:23:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.803 09:23:13 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.87 00:10:48.803 09:23:13 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.87 00:10:48.803 09:23:13 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:48.803 09:23:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:10:48.803 09:23:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:10:48.803 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 09:23:13 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66756 00:10:55.409 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66756) - No such process 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66756 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67304 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:55.409 09:23:19 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67304 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67304 ']' 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.409 09:23:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.409 [2024-11-20 09:23:19.937843] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
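Here the bdev-backed variant of the test begins: instead of watching the hotplug example app, tgt_run_hotplug launches spdk_tgt (banner above, EAL parameters below), arms hotplug monitoring with bdev_nvme_set_hotplug -e, and then polls the target over JSON-RPC to watch controllers disappear and return. The bdev_bdfs helper it loops on is traced further down; reconstructed here with a direct rpc.py call in place of the script's rpc_cmd wrapper:

    bdev_bdfs() {
        # Which PCI addresses currently back NVMe bdevs in the target?
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    # After each sysfs remove, wait until the expected BDFs are gone
    # (a sketch of the '(( 2 > 0 )) / sleep 0.5 / Still waiting' loop below):
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done

Success for a cycle is the final comparison at sw_hotplug.sh@71, where the sorted bdev_bdfs output must again equal "0000:00:10.0 0000:00:11.0".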
00:10:55.409 [2024-11-20 09:23:19.938428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67304 ] 00:10:55.409 [2024-11-20 09:23:20.092140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.409 [2024-11-20 09:23:20.193597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:55.409 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.409 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:55.409 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:55.409 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:55.409 09:23:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:55.410 09:23:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:55.410 09:23:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:55.410 09:23:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:55.410 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:55.410 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:55.410 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:55.410 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:55.410 09:23:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.047 09:23:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.047 09:23:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.047 09:23:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:02.047 09:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.047 [2024-11-20 09:23:26.915146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:02.047 [2024-11-20 09:23:26.916558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.047 [2024-11-20 09:23:26.916597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.047 [2024-11-20 09:23:26.916611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.047 [2024-11-20 09:23:26.916631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.047 [2024-11-20 09:23:26.916638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.047 [2024-11-20 09:23:26.916648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.047 [2024-11-20 09:23:26.916655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:26.916663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:26.916670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 [2024-11-20 09:23:26.916681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:26.916687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:26.916695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 [2024-11-20 09:23:27.315142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:02.048 [2024-11-20 09:23:27.316609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:27.316640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:27.316652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 [2024-11-20 09:23:27.316668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:27.316676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:27.316683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 [2024-11-20 09:23:27.316692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:27.316699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:27.316707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 [2024-11-20 09:23:27.316714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.048 [2024-11-20 09:23:27.316722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.048 [2024-11-20 09:23:27.316729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.048 09:23:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.048 09:23:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.048 09:23:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:02.048 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.323 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.324 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:02.324 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.324 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.324 09:23:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.513 09:23:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.513 09:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.513 09:23:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.513 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.514 09:23:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 09:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 09:23:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:14.514 09:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.514 [2024-11-20 09:23:39.815368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:14.514 [2024-11-20 09:23:39.816773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.514 [2024-11-20 09:23:39.816812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.514 [2024-11-20 09:23:39.816824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.514 [2024-11-20 09:23:39.816845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.514 [2024-11-20 09:23:39.816853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.514 [2024-11-20 09:23:39.816861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.514 [2024-11-20 09:23:39.816869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.514 [2024-11-20 09:23:39.816877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.514 [2024-11-20 09:23:39.816883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.514 [2024-11-20 09:23:39.816892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.514 [2024-11-20 09:23:39.816898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.514 [2024-11-20 09:23:39.816907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.080 09:23:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.080 09:23:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.080 09:23:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:15.080 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.080 [2024-11-20 09:23:40.515369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:15.080 [2024-11-20 09:23:40.516811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.080 [2024-11-20 09:23:40.516860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.080 [2024-11-20 09:23:40.516880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.080 [2024-11-20 09:23:40.516899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.080 [2024-11-20 09:23:40.516908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.080 [2024-11-20 09:23:40.516916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.080 [2024-11-20 09:23:40.516924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.080 [2024-11-20 09:23:40.516931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.080 [2024-11-20 09:23:40.516939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.080 [2024-11-20 09:23:40.516947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.080 [2024-11-20 09:23:40.516954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.080 [2024-11-20 09:23:40.516961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.649 09:23:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.649 09:23:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.649 09:23:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.649 09:23:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.649 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.649 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.649 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.649 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.649 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
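For anyone decoding the xtrace above: the detach poll loop at sw_hotplug.sh@50-@51 and the bdev_bdfs helper it expands at @12-@13 are fully visible in the trace. A hedged reconstruction, with names taken from the trace itself rather than from the script source (the real helper feeds jq through process substitution, which is why xtrace shows /dev/fd/63; a plain pipe behaves the same here):

    # @12-@13: PCI addresses (BDFs) that still back an NVMe bdev, deduplicated.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # @50-@51: after triggering the detach, poll until every BDF is gone.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

Once the list is empty the devices are re-attached (the @61-@62 steps for 0000:00:11.0 continue right below), and @71 then checks that exactly the original set came back; the backslash-escaped right-hand side in the [[ ... == ... ]] trace is just how bash xtrace prints a quoted, literal match pattern.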
00:11:15.906 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.906 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.906 09:23:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.175 [2024-11-20 09:23:53.215571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:28.175 [2024-11-20 09:23:53.217011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.175 [2024-11-20 09:23:53.217047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.175 [2024-11-20 09:23:53.217060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.175 [2024-11-20 09:23:53.217078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.175 [2024-11-20 09:23:53.217085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.175 [2024-11-20 09:23:53.217095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.175 [2024-11-20 09:23:53.217103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.175 [2024-11-20 09:23:53.217111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.175 [2024-11-20 09:23:53.217118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.175 [2024-11-20 09:23:53.217126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.175 [2024-11-20 09:23:53.217133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.175 [2024-11-20 09:23:53.217140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.175 09:23:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:28.175 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:28.441 [2024-11-20 09:23:53.715588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:28.441 [2024-11-20 09:23:53.716951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.441 [2024-11-20 09:23:53.716986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.441 [2024-11-20 09:23:53.716999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.441 [2024-11-20 09:23:53.717015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.441 [2024-11-20 09:23:53.717024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.441 [2024-11-20 09:23:53.717032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.441 [2024-11-20 09:23:53.717041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.441 [2024-11-20 09:23:53.717047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.441 [2024-11-20 09:23:53.717057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.441 [2024-11-20 09:23:53.717064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.441 [2024-11-20 09:23:53.717071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.441 [2024-11-20 09:23:53.717078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:11:28.441 09:23:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.441 09:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.441 09:23:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.441 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.703 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:28.703 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.703 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.703 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.703 09:23:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:28.703 09:23:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:28.703 09:23:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.703 09:23:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.961 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:40.961 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:40.961 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:40.961 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.29 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.29 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.29 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.29 2 00:11:40.962 remove_attach_helper took 45.29s to complete (handling 2 nvme drive(s)) 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:40.962 09:24:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.962 09:24:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.562 [2024-11-20 09:24:12.231569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
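The cryptic "3 6 true" traced at sw_hotplug.sh@27-@29 above maps onto named locals, so the whole phase reads as follows (the abort/completion dump for 0000:00:10.0 continues below):

    # Parameters of the run traced above (names from sw_hotplug.sh@27-@29):
    #   hotplug_events=3   three detach/re-attach cycles
    #   hotplug_wait=6     seconds slept before the first cycle (@36)
    #   use_bdev=true      verify via SPDK bdevs rather than raw controllers
    remove_attach_helper 3 6 true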
00:11:47.562 [2024-11-20 09:24:12.232650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.232683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.232695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.232717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.232725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.232734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.232742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.232750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.232757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.232766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.232773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.232783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.562 [2024-11-20 09:24:12.731575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:47.562 [2024-11-20 09:24:12.732906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.732936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.732948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.732964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.732973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.732980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.732990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.732997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.733005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 [2024-11-20 09:24:12.733013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.562 [2024-11-20 09:24:12.733021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.562 [2024-11-20 09:24:12.733028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.562 09:24:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:47.562 09:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.822 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.822 09:24:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.822 09:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.822 09:24:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.081 09:24:13 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.081 09:24:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.342 09:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.342 09:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.342 [2024-11-20 09:24:25.631824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:00.342 [2024-11-20 09:24:25.633256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.342 [2024-11-20 09:24:25.633294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.342 [2024-11-20 09:24:25.633315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.342 [2024-11-20 09:24:25.633334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.342 [2024-11-20 09:24:25.633342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.342 [2024-11-20 09:24:25.633351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.342 [2024-11-20 09:24:25.633359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.342 [2024-11-20 09:24:25.633367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.342 [2024-11-20 09:24:25.633374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.342 [2024-11-20 09:24:25.633383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.342 [2024-11-20 09:24:25.633389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.342 [2024-11-20 09:24:25.633397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.910 09:24:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.910 09:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.910 [2024-11-20 09:24:26.131820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:00.910 [2024-11-20 09:24:26.132840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.910 [2024-11-20 09:24:26.132866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.910 [2024-11-20 09:24:26.132878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.910 [2024-11-20 09:24:26.132893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.910 [2024-11-20 09:24:26.132905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.910 [2024-11-20 09:24:26.132912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.910 [2024-11-20 09:24:26.132923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.910 [2024-11-20 09:24:26.132931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.910 [2024-11-20 09:24:26.132939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.910 [2024-11-20 09:24:26.132947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.910 [2024-11-20 09:24:26.132955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.910 [2024-11-20 09:24:26.132961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.910 09:24:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.910 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.497 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.498 09:24:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.498 09:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.498 09:24:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.498 09:24:26 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.498 09:24:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.733 09:24:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.733 09:24:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.733 09:24:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.733 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.734 09:24:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.734 09:24:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.734 09:24:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.734 09:24:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.734 [2024-11-20 09:24:39.032041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:13.734 [2024-11-20 09:24:39.033136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.734 [2024-11-20 09:24:39.033174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.734 [2024-11-20 09:24:39.033186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.734 [2024-11-20 09:24:39.033204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.734 [2024-11-20 09:24:39.033212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.734 [2024-11-20 09:24:39.033220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.734 [2024-11-20 09:24:39.033228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.734 [2024-11-20 09:24:39.033239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.734 [2024-11-20 09:24:39.033246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.734 [2024-11-20 09:24:39.033255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.734 [2024-11-20 09:24:39.033262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.734 [2024-11-20 09:24:39.033270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.734 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:13.734 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.305 09:24:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.305 09:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.305 09:24:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:14.305 09:24:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.305 [2024-11-20 09:24:39.632053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:14.305 [2024-11-20 09:24:39.633076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.305 [2024-11-20 09:24:39.633111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.305 [2024-11-20 09:24:39.633123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.305 [2024-11-20 09:24:39.633138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.305 [2024-11-20 09:24:39.633147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.305 [2024-11-20 09:24:39.633154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.305 [2024-11-20 09:24:39.633163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.305 [2024-11-20 09:24:39.633170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.305 [2024-11-20 09:24:39.633178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.305 [2024-11-20 09:24:39.633185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.305 [2024-11-20 09:24:39.633195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.305 [2024-11-20 09:24:39.633202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.877 09:24:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.877 09:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.877 09:24:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.877 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
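xtrace never records where an echo is redirected, so the re-attach entries at @56-@62 (here and in the earlier cycles) show only the written values. A plausible reading of the sequence, in which every sysfs path is an assumption based on the usual Linux driver_override rebind flow and not something this log confirms (the @61-@62 steps for 0000:00:11.0 follow right after this note):

    echo 1 > /sys/bus/pci/rescan                                      # @56 (assumed target)
    for dev in "${nvmes[@]}"; do                                      # @58
        # @59: steer the device toward the userspace I/O stub driver
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        # @60/@61: the BDF is echoed twice in the trace; plausibly a probe
        # request plus an explicit bind, though neither target is certain
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind 2> /dev/null || true
        # @62: clear the override again
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done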
00:12:15.141 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.141 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.141 09:24:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.26 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.26 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.26 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.26 2 00:12:27.464 remove_attach_helper took 46.26s to complete (handling 2 nvme drive(s)) 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:27.464 09:24:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67304 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67304 ']' 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67304 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.464 09:24:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67304 00:12:27.465 09:24:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.465 09:24:52 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.465 killing process with pid 67304 00:12:27.465 09:24:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67304' 00:12:27.465 09:24:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67304 00:12:27.465 09:24:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67304 00:12:28.403 09:24:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:28.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.922 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.922 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:29.233 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:29.233 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:29.233 ************************************ 00:12:29.233 END TEST sw_hotplug 00:12:29.233 ************************************ 00:12:29.233 
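The "remove_attach_helper took 45.29s/46.26s" summaries come from the timing wrapper whose internals are traced above (autotest_common.sh@709-@722): TIMEFORMAT=%2R makes bash's time keyword report only the elapsed seconds with two decimals, and that figure becomes helper_time for the printf at sw_hotplug.sh@21-@22. A minimal sketch of the mechanism; the variable names and the exec at @711 are from the trace, but the exact capture plumbing below is assumed:

    timing_cmd() {
        local cmd_es=0                  # @709: remember the command's exit status
        local time TIMEFORMAT=%2R       # @713: report only elapsed real time, 2 decimals
        # bash's `time` keyword writes its report to stderr, so swap streams to
        # capture it; the timed command's own stderr gets mixed in with this scheme.
        exec 3>&1
        time=$({ time "$@" 1>&3; } 2>&1) || cmd_es=$?
        exec 3>&-
        echo "$time"                    # e.g. 45.29, consumed as helper_time
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' \
        "$helper_time" 2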
00:12:29.233 real 2m30.884s 00:12:29.233 user 1m53.109s 00:12:29.233 sys 0m16.679s 00:12:29.233 09:24:54 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.233 09:24:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:29.233 09:24:54 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:29.233 09:24:54 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:29.233 09:24:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:29.233 09:24:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.233 09:24:54 -- common/autotest_common.sh@10 -- # set +x 00:12:29.233 ************************************ 00:12:29.234 START TEST nvme_xnvme 00:12:29.234 ************************************ 00:12:29.234 09:24:54 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:29.234 * Looking for test storage... 00:12:29.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.234 09:24:54 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.234 09:24:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.234 09:24:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.495 --rc genhtml_branch_coverage=1 00:12:29.495 --rc genhtml_function_coverage=1 00:12:29.495 --rc genhtml_legend=1 00:12:29.495 --rc geninfo_all_blocks=1 00:12:29.495 --rc geninfo_unexecuted_blocks=1 00:12:29.495 00:12:29.495 ' 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.495 --rc genhtml_branch_coverage=1 00:12:29.495 --rc genhtml_function_coverage=1 00:12:29.495 --rc genhtml_legend=1 00:12:29.495 --rc geninfo_all_blocks=1 00:12:29.495 --rc geninfo_unexecuted_blocks=1 00:12:29.495 00:12:29.495 ' 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.495 --rc genhtml_branch_coverage=1 00:12:29.495 --rc genhtml_function_coverage=1 00:12:29.495 --rc genhtml_legend=1 00:12:29.495 --rc geninfo_all_blocks=1 00:12:29.495 --rc geninfo_unexecuted_blocks=1 00:12:29.495 00:12:29.495 ' 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.495 --rc genhtml_branch_coverage=1 00:12:29.495 --rc genhtml_function_coverage=1 00:12:29.495 --rc genhtml_legend=1 00:12:29.495 --rc geninfo_all_blocks=1 00:12:29.495 --rc geninfo_unexecuted_blocks=1 00:12:29.495 00:12:29.495 ' 00:12:29.495 09:24:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.495 09:24:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.495 09:24:54 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.495 09:24:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.495 09:24:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.495 09:24:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:29.495 09:24:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.495 09:24:54 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.495 09:24:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.495 ************************************ 00:12:29.495 START TEST xnvme_to_malloc_dd_copy 00:12:29.495 ************************************ 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:29.495 09:24:54 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:29.495 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:29.496 09:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.496 { 00:12:29.496 "subsystems": [ 00:12:29.496 { 00:12:29.496 "subsystem": "bdev", 00:12:29.496 "config": [ 00:12:29.496 { 00:12:29.496 "params": { 00:12:29.496 "block_size": 512, 00:12:29.496 "num_blocks": 2097152, 00:12:29.496 "name": "malloc0" 00:12:29.496 }, 00:12:29.496 "method": "bdev_malloc_create" 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "params": { 00:12:29.496 "io_mechanism": "libaio", 00:12:29.496 "filename": "/dev/nullb0", 00:12:29.496 "name": "null0" 00:12:29.496 }, 00:12:29.496 "method": "bdev_xnvme_create" 00:12:29.496 }, 00:12:29.496 { 00:12:29.496 "method": "bdev_wait_for_examine" 00:12:29.496 } 00:12:29.496 ] 00:12:29.496 } 00:12:29.496 ] 00:12:29.496 } 00:12:29.496 [2024-11-20 09:24:54.807435] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
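The JSON block above is what gen_conf emits; spdk_dd receives it through --json /dev/fd/62, i.e. over process substitution, with --ib naming the input bdev (malloc0, the RAM-backed source) and --ob the output bdev (null0, the xnvme device). The same transfer can be reproduced standalone by first saving that JSON verbatim to a file (config.json is just a scratch name for this sketch):

    # Sketch: same copy as traced above, with the printed JSON saved to
    # config.json instead of arriving over /dev/fd/62.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json config.json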
00:12:29.496 [2024-11-20 09:24:54.807566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68706 ] 00:12:29.756 [2024-11-20 09:24:54.966321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.757 [2024-11-20 09:24:55.073834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.670  [2024-11-20T09:24:58.065Z] Copying: 230/1024 [MB] (230 MBps) [2024-11-20T09:24:59.461Z] Copying: 462/1024 [MB] (231 MBps) [2024-11-20T09:25:00.395Z] Copying: 689/1024 [MB] (227 MBps) [2024-11-20T09:25:00.652Z] Copying: 920/1024 [MB] (231 MBps) [2024-11-20T09:25:03.216Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:12:37.760 00:12:37.760 09:25:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:37.760 09:25:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:37.760 09:25:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:37.760 09:25:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.760 { 00:12:37.760 "subsystems": [ 00:12:37.760 { 00:12:37.760 "subsystem": "bdev", 00:12:37.760 "config": [ 00:12:37.760 { 00:12:37.760 "params": { 00:12:37.760 "block_size": 512, 00:12:37.760 "num_blocks": 2097152, 00:12:37.760 "name": "malloc0" 00:12:37.760 }, 00:12:37.760 "method": "bdev_malloc_create" 00:12:37.760 }, 00:12:37.760 { 00:12:37.760 "params": { 00:12:37.760 "io_mechanism": "libaio", 00:12:37.760 "filename": "/dev/nullb0", 00:12:37.760 "name": "null0" 00:12:37.760 }, 00:12:37.760 "method": "bdev_xnvme_create" 00:12:37.760 }, 00:12:37.760 { 00:12:37.760 "method": "bdev_wait_for_examine" 00:12:37.760 } 00:12:37.760 ] 00:12:37.760 } 00:12:37.760 ] 00:12:37.760 } 00:12:37.760 [2024-11-20 09:25:02.785859] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
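[annotation] The second pass above reuses the identical bdev config and only swaps --ib and --ob (null0 -> malloc0), so the read path of the xnvme bdev gets exercised too. The same two bdevs can also be created against an already-running SPDK target over RPC; a hedged sketch (the rpc.py positional argument order is assumed from SPDK's scripts, it is not visible in this log):

    # 1024 MB malloc bdev with 512 B blocks -- same geometry as the JSON params above
    ./scripts/rpc.py bdev_malloc_create -b malloc0 1024 512
    # xnvme bdev over the null_blk node, libaio backend
    ./scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 libaio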
00:12:37.760 [2024-11-20 09:25:02.786036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:12:37.760 [2024-11-20 09:25:02.956855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.760 [2024-11-20 09:25:03.059992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.671  [2024-11-20T09:25:06.067Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-20T09:25:07.447Z] Copying: 460/1024 [MB] (235 MBps) [2024-11-20T09:25:08.384Z] Copying: 695/1024 [MB] (235 MBps) [2024-11-20T09:25:08.384Z] Copying: 977/1024 [MB] (282 MBps) [2024-11-20T09:25:10.294Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:12:44.838 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:44.838 09:25:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.838 { 00:12:44.838 "subsystems": [ 00:12:44.838 { 00:12:44.838 "subsystem": "bdev", 00:12:44.838 "config": [ 00:12:44.838 { 00:12:44.838 "params": { 00:12:44.838 "block_size": 512, 00:12:44.838 "num_blocks": 2097152, 00:12:44.838 "name": "malloc0" 00:12:44.838 }, 00:12:44.838 "method": "bdev_malloc_create" 00:12:44.838 }, 00:12:44.838 { 00:12:44.838 "params": { 00:12:44.838 "io_mechanism": "io_uring", 00:12:44.838 "filename": "/dev/nullb0", 00:12:44.838 "name": "null0" 00:12:44.838 }, 00:12:44.838 "method": "bdev_xnvme_create" 00:12:44.838 }, 00:12:44.838 { 00:12:44.838 "method": "bdev_wait_for_examine" 00:12:44.838 } 00:12:44.838 ] 00:12:44.838 } 00:12:44.838 ] 00:12:44.838 } 00:12:45.099 [2024-11-20 09:25:10.295172] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
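[annotation] The xnvme.sh@38/@39 xtrace lines above show the whole copy pair being rerun once per I/O mechanism, with nothing but the io_mechanism key changing between passes; in outline (condensed from the traced script lines, not a verbatim excerpt):

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # write path: malloc bdev -> xnvme bdev (xnvme.sh@42)
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)
        # read path: xnvme bdev -> malloc bdev (xnvme.sh@47)
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)
    done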
00:12:45.099 [2024-11-20 09:25:10.295314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68891 ] 00:12:45.099 [2024-11-20 09:25:10.444572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.099 [2024-11-20 09:25:10.536112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.012  [2024-11-20T09:25:13.412Z] Copying: 296/1024 [MB] (296 MBps) [2024-11-20T09:25:14.354Z] Copying: 593/1024 [MB] (296 MBps) [2024-11-20T09:25:14.925Z] Copying: 892/1024 [MB] (298 MBps) [2024-11-20T09:25:16.840Z] Copying: 1024/1024 [MB] (average 297 MBps) 00:12:51.384 00:12:51.384 09:25:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:51.384 09:25:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:51.384 09:25:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:51.384 09:25:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:51.384 { 00:12:51.384 "subsystems": [ 00:12:51.384 { 00:12:51.384 "subsystem": "bdev", 00:12:51.384 "config": [ 00:12:51.384 { 00:12:51.384 "params": { 00:12:51.384 "block_size": 512, 00:12:51.384 "num_blocks": 2097152, 00:12:51.384 "name": "malloc0" 00:12:51.384 }, 00:12:51.384 "method": "bdev_malloc_create" 00:12:51.384 }, 00:12:51.384 { 00:12:51.384 "params": { 00:12:51.384 "io_mechanism": "io_uring", 00:12:51.384 "filename": "/dev/nullb0", 00:12:51.384 "name": "null0" 00:12:51.384 }, 00:12:51.384 "method": "bdev_xnvme_create" 00:12:51.384 }, 00:12:51.384 { 00:12:51.384 "method": "bdev_wait_for_examine" 00:12:51.384 } 00:12:51.384 ] 00:12:51.384 } 00:12:51.384 ] 00:12:51.384 } 00:12:51.645 [2024-11-20 09:25:16.846421] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
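[annotation] A worked check on the progress lines: the 1024 [MB] total is fixed by the bdev geometry in the config, not by anything the copy measures:

    # num_blocks x block_size from the JSON above:
    # 2097152 blocks x 512 B = 1 GiB, the same size as the gb=1 null_blk device
    echo $(( 2097152 * 512 / 1024 / 1024 ))   # prints 1024 (MB)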
00:12:51.646 [2024-11-20 09:25:16.846599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68973 ] 00:12:51.646 [2024-11-20 09:25:17.021385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.907 [2024-11-20 09:25:17.124182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.900  [2024-11-20T09:25:20.287Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-20T09:25:21.219Z] Copying: 482/1024 [MB] (242 MBps) [2024-11-20T09:25:22.153Z] Copying: 743/1024 [MB] (260 MBps) [2024-11-20T09:25:24.052Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:12:58.596 00:12:58.596 09:25:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:58.596 09:25:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:58.854 00:12:58.854 real 0m29.327s 00:12:58.854 user 0m25.830s 00:12:58.854 sys 0m2.925s 00:12:58.854 09:25:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.854 09:25:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:58.854 ************************************ 00:12:58.854 END TEST xnvme_to_malloc_dd_copy 00:12:58.854 ************************************ 00:12:58.854 09:25:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:58.854 09:25:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.854 09:25:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.854 09:25:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.854 ************************************ 00:12:58.854 START TEST xnvme_bdevperf 00:12:58.854 ************************************ 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:58.854 
09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:58.854 09:25:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.854 { 00:12:58.854 "subsystems": [ 00:12:58.854 { 00:12:58.854 "subsystem": "bdev", 00:12:58.854 "config": [ 00:12:58.854 { 00:12:58.854 "params": { 00:12:58.854 "io_mechanism": "libaio", 00:12:58.854 "filename": "/dev/nullb0", 00:12:58.854 "name": "null0" 00:12:58.854 }, 00:12:58.854 "method": "bdev_xnvme_create" 00:12:58.854 }, 00:12:58.854 { 00:12:58.854 "method": "bdev_wait_for_examine" 00:12:58.854 } 00:12:58.854 ] 00:12:58.854 } 00:12:58.854 ] 00:12:58.854 } 00:12:58.854 [2024-11-20 09:25:24.170950] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:12:58.854 [2024-11-20 09:25:24.171064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69082 ] 00:12:59.111 [2024-11-20 09:25:24.328676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.111 [2024-11-20 09:25:24.413395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.370 Running I/O for 5 seconds... 00:13:01.266 193152.00 IOPS, 754.50 MiB/s [2024-11-20T09:25:27.656Z] 193184.00 IOPS, 754.62 MiB/s [2024-11-20T09:25:28.640Z] 192341.33 IOPS, 751.33 MiB/s [2024-11-20T09:25:30.011Z] 193056.00 IOPS, 754.12 MiB/s 00:13:04.555 Latency(us) 00:13:04.555 [2024-11-20T09:25:30.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.555 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:04.555 null0 : 5.00 193491.09 755.82 0.00 0.00 328.41 114.22 1638.40 00:13:04.555 [2024-11-20T09:25:30.011Z] =================================================================================================================== 00:13:04.555 [2024-11-20T09:25:30.011Z] Total : 193491.09 755.82 0.00 0.00 328.41 114.22 1638.40 00:13:04.812 09:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:04.812 09:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:04.812 09:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:04.813 09:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:04.813 09:25:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.813 09:25:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.813 { 00:13:04.813 "subsystems": [ 00:13:04.813 { 00:13:04.813 "subsystem": "bdev", 00:13:04.813 "config": [ 00:13:04.813 { 00:13:04.813 "params": { 00:13:04.813 "io_mechanism": "io_uring", 00:13:04.813 "filename": "/dev/nullb0", 00:13:04.813 "name": "null0" 00:13:04.813 }, 00:13:04.813 "method": "bdev_xnvme_create" 00:13:04.813 }, 00:13:04.813 { 00:13:04.813 "method": 
"bdev_wait_for_examine" 00:13:04.813 } 00:13:04.813 ] 00:13:04.813 } 00:13:04.813 ] 00:13:04.813 } 00:13:05.070 [2024-11-20 09:25:30.287653] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:05.070 [2024-11-20 09:25:30.287775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69152 ] 00:13:05.070 [2024-11-20 09:25:30.451077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.331 [2024-11-20 09:25:30.537195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.331 Running I/O for 5 seconds... 00:13:07.649 222272.00 IOPS, 868.25 MiB/s [2024-11-20T09:25:34.044Z] 220544.00 IOPS, 861.50 MiB/s [2024-11-20T09:25:34.978Z] 220074.67 IOPS, 859.67 MiB/s [2024-11-20T09:25:35.910Z] 220144.00 IOPS, 859.94 MiB/s 00:13:10.454 Latency(us) 00:13:10.454 [2024-11-20T09:25:35.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.455 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.455 null0 : 5.00 220256.59 860.38 0.00 0.00 288.19 155.96 1613.19 00:13:10.455 [2024-11-20T09:25:35.911Z] =================================================================================================================== 00:13:10.455 [2024-11-20T09:25:35.911Z] Total : 220256.59 860.38 0.00 0.00 288.19 155.96 1613.19 00:13:11.020 09:25:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:11.020 09:25:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:11.020 00:13:11.020 real 0m12.276s 00:13:11.020 user 0m9.839s 00:13:11.020 sys 0m2.181s 00:13:11.020 09:25:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.020 09:25:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.020 ************************************ 00:13:11.020 END TEST xnvme_bdevperf 00:13:11.020 ************************************ 00:13:11.020 00:13:11.020 real 0m41.829s 00:13:11.020 user 0m35.780s 00:13:11.020 sys 0m5.223s 00:13:11.020 09:25:36 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.020 09:25:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.020 ************************************ 00:13:11.020 END TEST nvme_xnvme 00:13:11.020 ************************************ 00:13:11.020 09:25:36 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:11.020 09:25:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.020 09:25:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.020 09:25:36 -- common/autotest_common.sh@10 -- # set +x 00:13:11.020 ************************************ 00:13:11.020 START TEST blockdev_xnvme 00:13:11.020 ************************************ 00:13:11.020 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:11.279 * Looking for test storage... 
00:13:11.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.279 09:25:36 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.279 --rc genhtml_branch_coverage=1 00:13:11.279 --rc genhtml_function_coverage=1 00:13:11.279 --rc genhtml_legend=1 00:13:11.279 --rc geninfo_all_blocks=1 00:13:11.279 --rc geninfo_unexecuted_blocks=1 00:13:11.279 00:13:11.279 ' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.279 --rc genhtml_branch_coverage=1 00:13:11.279 --rc genhtml_function_coverage=1 00:13:11.279 --rc genhtml_legend=1 
00:13:11.279 --rc geninfo_all_blocks=1 00:13:11.279 --rc geninfo_unexecuted_blocks=1 00:13:11.279 00:13:11.279 ' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.279 --rc genhtml_branch_coverage=1 00:13:11.279 --rc genhtml_function_coverage=1 00:13:11.279 --rc genhtml_legend=1 00:13:11.279 --rc geninfo_all_blocks=1 00:13:11.279 --rc geninfo_unexecuted_blocks=1 00:13:11.279 00:13:11.279 ' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.279 --rc genhtml_branch_coverage=1 00:13:11.279 --rc genhtml_function_coverage=1 00:13:11.279 --rc genhtml_legend=1 00:13:11.279 --rc geninfo_all_blocks=1 00:13:11.279 --rc geninfo_unexecuted_blocks=1 00:13:11.279 00:13:11.279 ' 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69296 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69296 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69296 ']' 00:13:11.279 09:25:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:11.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.279 09:25:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.279 [2024-11-20 09:25:36.683149] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:11.279 [2024-11-20 09:25:36.683316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69296 ] 00:13:11.537 [2024-11-20 09:25:36.847399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.537 [2024-11-20 09:25:36.936663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.103 09:25:37 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.103 09:25:37 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:13:12.103 09:25:37 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:12.103 09:25:37 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:12.103 09:25:37 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:12.103 09:25:37 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:12.103 09:25:37 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:12.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:12.619 Waiting for block devices as requested 00:13:12.619 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.619 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.619 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.876 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.140 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:18.140 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- 
# is_block_zoned nvme1n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:18.140 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:18.141 nvme0n1 00:13:18.141 nvme1n1 00:13:18.141 nvme2n1 00:13:18.141 nvme2n2 00:13:18.141 nvme2n3 00:13:18.141 nvme3n1 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.141 09:25:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:18.141 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:18.142 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "000b5e90-6fff-4a15-8a9c-34a44ae4c779"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "000b5e90-6fff-4a15-8a9c-34a44ae4c779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c0994509-dbd4-46e1-b9e7-d4499ab3b524"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c0994509-dbd4-46e1-b9e7-d4499ab3b524",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "257286cd-5084-4385-837c-17982b483def"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "257286cd-5084-4385-837c-17982b483def",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d598fec6-8a60-46d0-8704-98a02d993351"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d598fec6-8a60-46d0-8704-98a02d993351",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "01f3a77c-b125-4c64-8405-8714eed34a5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "01f3a77c-b125-4c64-8405-8714eed34a5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d8fd4ca3-edbf-4d92-a323-6fe009734743"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8fd4ca3-edbf-4d92-a323-6fe009734743",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:18.142 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:18.142 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:18.142 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:18.142 09:25:43 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69296 
00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69296 ']' 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69296 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69296 00:13:18.142 killing process with pid 69296 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69296' 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69296 00:13:18.142 09:25:43 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69296 00:13:19.522 09:25:44 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:19.522 09:25:44 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:19.522 09:25:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:19.522 09:25:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.522 09:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.522 ************************************ 00:13:19.522 START TEST bdev_hello_world 00:13:19.522 ************************************ 00:13:19.522 09:25:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:19.522 [2024-11-20 09:25:44.939046] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:19.522 [2024-11-20 09:25:44.939202] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69658 ] 00:13:19.781 [2024-11-20 09:25:45.093489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.781 [2024-11-20 09:25:45.195315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.346 [2024-11-20 09:25:45.528535] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:20.346 [2024-11-20 09:25:45.528593] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:20.346 [2024-11-20 09:25:45.528613] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:20.346 [2024-11-20 09:25:45.530608] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:20.346 [2024-11-20 09:25:45.530904] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:20.346 [2024-11-20 09:25:45.530926] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:20.346 [2024-11-20 09:25:45.531088] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
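[annotation] The NOTICE sequence above is the entire hello_bdev flow: start the app, open the bdev nvme0n1, get an I/O channel, write the "Hello World!" string, read it back, and compare. The invocation as captured in the xtrace (repo-relative path substituted for the vagrant one):

    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1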
00:13:20.346 00:13:20.346 [2024-11-20 09:25:45.531107] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:20.912 00:13:20.912 real 0m1.352s 00:13:20.912 user 0m1.062s 00:13:20.912 sys 0m0.176s 00:13:20.912 09:25:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.912 09:25:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 ************************************ 00:13:20.912 END TEST bdev_hello_world 00:13:20.912 ************************************ 00:13:20.912 09:25:46 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:20.912 09:25:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.912 09:25:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.912 09:25:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 ************************************ 00:13:20.912 START TEST bdev_bounds 00:13:20.912 ************************************ 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69696 00:13:20.912 Process bdevio pid: 69696 00:13:20.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69696' 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69696 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69696 ']' 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:20.912 09:25:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 [2024-11-20 09:25:46.339296] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
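[annotation] bdevio runs in wait mode here: -w makes it sit idle until the suites are kicked off over RPC, and -s 0 passes PRE_RESERVED_MEM=0 (set in blockdev.sh, as traced earlier) as the pre-reserved memory size. In outline:

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once the app is up, trigger all registered suites over RPC:
    ./test/bdev/bdevio/tests.py perform_tests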
00:13:20.912 [2024-11-20 09:25:46.339448] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69696 ] 00:13:21.170 [2024-11-20 09:25:46.498283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.170 [2024-11-20 09:25:46.604219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.170 [2024-11-20 09:25:46.604283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.170 [2024-11-20 09:25:46.604293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.105 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.105 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:22.105 09:25:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:22.105 I/O targets: 00:13:22.105 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:22.105 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:22.105 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.105 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.105 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.105 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:22.105 00:13:22.105 00:13:22.105 CUnit - A unit testing framework for C - Version 2.1-3 00:13:22.105 http://cunit.sourceforge.net/ 00:13:22.105 00:13:22.105 00:13:22.105 Suite: bdevio tests on: nvme3n1 00:13:22.105 Test: blockdev write read block ...passed 00:13:22.105 Test: blockdev write zeroes read block ...passed 00:13:22.105 Test: blockdev write zeroes read no split ...passed 00:13:22.105 Test: blockdev write zeroes read split ...passed 00:13:22.105 Test: blockdev write zeroes read split partial ...passed 00:13:22.105 Test: blockdev reset ...passed 00:13:22.105 Test: blockdev write read 8 blocks ...passed 00:13:22.105 Test: blockdev write read size > 128k ...passed 00:13:22.105 Test: blockdev write read invalid size ...passed 00:13:22.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.105 Test: blockdev write read max offset ...passed 00:13:22.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.105 Test: blockdev writev readv 8 blocks ...passed 00:13:22.105 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.105 Test: blockdev writev readv block ...passed 00:13:22.105 Test: blockdev writev readv size > 128k ...passed 00:13:22.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.105 Test: blockdev comparev and writev ...passed 00:13:22.105 Test: blockdev nvme passthru rw ...passed 00:13:22.105 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.105 Test: blockdev nvme admin passthru ...passed 00:13:22.105 Test: blockdev copy ...passed 00:13:22.105 Suite: bdevio tests on: nvme2n3 00:13:22.105 Test: blockdev write read block ...passed 00:13:22.105 Test: blockdev write zeroes read block ...passed 00:13:22.105 Test: blockdev write zeroes read no split ...passed 00:13:22.105 Test: blockdev write zeroes read split ...passed 00:13:22.105 Test: blockdev write zeroes read split partial ...passed 00:13:22.105 Test: blockdev reset ...passed 
00:13:22.105 Test: blockdev write read 8 blocks ...passed 00:13:22.105 Test: blockdev write read size > 128k ...passed 00:13:22.105 Test: blockdev write read invalid size ...passed 00:13:22.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.105 Test: blockdev write read max offset ...passed 00:13:22.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.105 Test: blockdev writev readv 8 blocks ...passed 00:13:22.105 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.105 Test: blockdev writev readv block ...passed 00:13:22.105 Test: blockdev writev readv size > 128k ...passed 00:13:22.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.105 Test: blockdev comparev and writev ...passed 00:13:22.105 Test: blockdev nvme passthru rw ...passed 00:13:22.105 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.105 Test: blockdev nvme admin passthru ...passed 00:13:22.105 Test: blockdev copy ...passed 00:13:22.105 Suite: bdevio tests on: nvme2n2 00:13:22.105 Test: blockdev write read block ...passed 00:13:22.105 Test: blockdev write zeroes read block ...passed 00:13:22.105 Test: blockdev write zeroes read no split ...passed 00:13:22.105 Test: blockdev write zeroes read split ...passed 00:13:22.105 Test: blockdev write zeroes read split partial ...passed 00:13:22.105 Test: blockdev reset ...passed 00:13:22.105 Test: blockdev write read 8 blocks ...passed 00:13:22.105 Test: blockdev write read size > 128k ...passed 00:13:22.105 Test: blockdev write read invalid size ...passed 00:13:22.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.105 Test: blockdev write read max offset ...passed 00:13:22.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.105 Test: blockdev writev readv 8 blocks ...passed 00:13:22.105 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.105 Test: blockdev writev readv block ...passed 00:13:22.105 Test: blockdev writev readv size > 128k ...passed 00:13:22.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.105 Test: blockdev comparev and writev ...passed 00:13:22.105 Test: blockdev nvme passthru rw ...passed 00:13:22.105 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.105 Test: blockdev nvme admin passthru ...passed 00:13:22.105 Test: blockdev copy ...passed 00:13:22.105 Suite: bdevio tests on: nvme2n1 00:13:22.105 Test: blockdev write read block ...passed 00:13:22.105 Test: blockdev write zeroes read block ...passed 00:13:22.105 Test: blockdev write zeroes read no split ...passed 00:13:22.105 Test: blockdev write zeroes read split ...passed 00:13:22.106 Test: blockdev write zeroes read split partial ...passed 00:13:22.106 Test: blockdev reset ...passed 00:13:22.106 Test: blockdev write read 8 blocks ...passed 00:13:22.106 Test: blockdev write read size > 128k ...passed 00:13:22.106 Test: blockdev write read invalid size ...passed 00:13:22.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.106 Test: blockdev write read max offset ...passed 00:13:22.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.106 Test: blockdev writev readv 8 blocks 
...passed 00:13:22.106 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.106 Test: blockdev writev readv block ...passed 00:13:22.106 Test: blockdev writev readv size > 128k ...passed 00:13:22.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.106 Test: blockdev comparev and writev ...passed 00:13:22.106 Test: blockdev nvme passthru rw ...passed 00:13:22.106 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.106 Test: blockdev nvme admin passthru ...passed 00:13:22.106 Test: blockdev copy ...passed 00:13:22.106 Suite: bdevio tests on: nvme1n1 00:13:22.106 Test: blockdev write read block ...passed 00:13:22.106 Test: blockdev write zeroes read block ...passed 00:13:22.106 Test: blockdev write zeroes read no split ...passed 00:13:22.106 Test: blockdev write zeroes read split ...passed 00:13:22.364 Test: blockdev write zeroes read split partial ...passed 00:13:22.364 Test: blockdev reset ...passed 00:13:22.364 Test: blockdev write read 8 blocks ...passed 00:13:22.364 Test: blockdev write read size > 128k ...passed 00:13:22.364 Test: blockdev write read invalid size ...passed 00:13:22.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.364 Test: blockdev write read max offset ...passed 00:13:22.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.364 Test: blockdev writev readv 8 blocks ...passed 00:13:22.364 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.364 Test: blockdev writev readv block ...passed 00:13:22.364 Test: blockdev writev readv size > 128k ...passed 00:13:22.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.364 Test: blockdev comparev and writev ...passed 00:13:22.364 Test: blockdev nvme passthru rw ...passed 00:13:22.364 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.364 Test: blockdev nvme admin passthru ...passed 00:13:22.364 Test: blockdev copy ...passed 00:13:22.364 Suite: bdevio tests on: nvme0n1 00:13:22.364 Test: blockdev write read block ...passed 00:13:22.364 Test: blockdev write zeroes read block ...passed 00:13:22.364 Test: blockdev write zeroes read no split ...passed 00:13:22.364 Test: blockdev write zeroes read split ...passed 00:13:22.364 Test: blockdev write zeroes read split partial ...passed 00:13:22.364 Test: blockdev reset ...passed 00:13:22.364 Test: blockdev write read 8 blocks ...passed 00:13:22.364 Test: blockdev write read size > 128k ...passed 00:13:22.364 Test: blockdev write read invalid size ...passed 00:13:22.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.364 Test: blockdev write read max offset ...passed 00:13:22.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.364 Test: blockdev writev readv 8 blocks ...passed 00:13:22.364 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.364 Test: blockdev writev readv block ...passed 00:13:22.364 Test: blockdev writev readv size > 128k ...passed 00:13:22.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.364 Test: blockdev comparev and writev ...passed 00:13:22.364 Test: blockdev nvme passthru rw ...passed 00:13:22.364 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.364 Test: blockdev nvme admin passthru ...passed 00:13:22.364 Test: blockdev copy ...passed 
00:13:22.364
00:13:22.364 Run Summary: Type Total Ran Passed Failed Inactive
00:13:22.364 suites 6 6 n/a 0 0
00:13:22.364 tests 138 138 138 0 0
00:13:22.364 asserts 780 780 780 0 n/a
00:13:22.364
00:13:22.364 Elapsed time = 0.941 seconds
00:13:22.364 0
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69696
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69696 ']'
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69696
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69696
00:13:22.364 killing process with pid 69696
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69696'
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69696
00:13:22.364 09:25:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69696
00:13:23.386 09:25:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:23.386
00:13:23.386 real 0m2.172s
00:13:23.386 user 0m5.432s
00:13:23.386 sys 0m0.292s
00:13:23.386 09:25:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:23.386 09:25:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:23.386 ************************************
00:13:23.386 END TEST bdev_bounds
00:13:23.386 ************************************
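
[Editor's note] The teardown traced above follows a reusable kill-and-reap pattern. Below is a minimal stand-alone sketch of that pattern for illustration only; it is a reduction, not the real killprocess helper from SPDK's test/common/autotest_common.sh (which, as the ps/process_name lines hint, also handles processes launched under sudo).

    # Hypothetical reduction of the killprocess pattern seen in the trace.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # refuse to run without a PID
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # same probe as in the trace
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"   # SIGTERM lets the SPDK reactor shut down cleanly
        wait "$pid"   # reap the child (works here because it is a child of this shell)
    }
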
00:13:23.386 09:25:48 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:23.386 09:25:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:23.386 09:25:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:23.386 09:25:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:23.386 ************************************
00:13:23.386 START TEST bdev_nbd
00:13:23.386 ************************************
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:13:23.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69752
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69752 /var/tmp/spdk-nbd.sock
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69752 ']'
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:23.386 09:25:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:23.386 [2024-11-20 09:25:48.561650] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:13:23.386 [2024-11-20 09:25:48.561805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:23.386 [2024-11-20 09:25:48.724943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:23.386 [2024-11-20 09:25:48.828400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
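
[Editor's note] The waitforlisten step above starts bdev_svc in the background and blocks until the RPC socket is usable. The trace only shows the setup lines, so the loop body below (the socket test and the sleep interval) is an assumption, not the actual autotest_common.sh implementation; names mirror the trace.

    # Minimal sketch of the waitforlisten idea under stated assumptions.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # daemon died during startup
            [ -S "$rpc_addr" ] && return 0          # socket is up; RPCs can be issued
            sleep 0.1                               # assumed polling interval
        done
        return 1
    }
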
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1'
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1'
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:24.318 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:24.319 1+0 records in
00:13:24.319 1+0 records out
00:13:24.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052744 s, 7.8 MB/s
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:24.319 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:13:24.576 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:13:24.576 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:24.577 1+0 records in
00:13:24.577 1+0 records out
00:13:24.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499529 s, 8.2 MB/s
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:24.577 09:25:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:24.835 1+0 records in
00:13:24.835 1+0 records out
00:13:24.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435682 s, 9.4 MB/s
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:24.835 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:25.093 1+0 records in
00:13:25.093 1+0 records out
00:13:25.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684021 s, 6.0 MB/s
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:25.093 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:25.352 1+0 records in
00:13:25.352 1+0 records out
00:13:25.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486792 s, 8.4 MB/s
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:13:25.352 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:25.609 1+0 records in
00:13:25.609 1+0 records out
00:13:25.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053488 s, 7.7 MB/s
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:25.609 09:25:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
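
[Editor's note] Each device start above ends with the same waitfornbd readiness check: poll /proc/partitions until the kernel publishes the device, then prove it is readable with a single direct-I/O block. A condensed sketch of that pattern follows; it is an illustration assembled from the trace, not the verbatim autotest_common.sh helper, and the temporary file path is illustrative.

    # Sketch of the waitfornbd pattern, under stated assumptions.
    waitfornbd_sketch() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered?
            sleep 0.1                                          # assumed retry delay
        done
        # One 4 KiB O_DIRECT read: bypasses the page cache, so it really hits the NBD path.
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # a non-empty read proves the device answered
    }
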
"/dev/nbd5", 00:13:25.868 "bdev_name": "nvme3n1" 00:13:25.868 } 00:13:25.868 ]' 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.868 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.126 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.385 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.643 09:25:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:26.643 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.900 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.503 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:28.020 /dev/nbd0 00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- 
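
[Editor's note] The start/stop verification above revolves around one small idiom: ask the NBD server for its current mappings as JSON and extract the device paths with jq. A sketch of exactly that pattern, using the same paths and jq filter as the trace:

    # List NBD mappings over the RPC socket and pull out the device paths.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # One "/dev/nbdX" per array element; an empty JSON array yields an empty list,
    # which is how the test confirms that every device was stopped.
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
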
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:27.761 09:25:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:13:28.020 /dev/nbd0
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:28.020 1+0 records in
00:13:28.020 1+0 records out
00:13:28.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372755 s, 11.0 MB/s
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:28.020 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1
00:13:28.020 /dev/nbd1
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:28.277 1+0 records in
00:13:28.277 1+0 records out
00:13:28.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438804 s, 9.3 MB/s
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10
00:13:28.277 /dev/nbd10
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:28.277 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:28.535 1+0 records in
00:13:28.535 1+0 records out
00:13:28.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448241 s, 9.1 MB/s
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11
00:13:28.535 /dev/nbd11
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:28.535 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:28.535 1+0 records in
00:13:28.535 1+0 records out
00:13:28.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446985 s, 9.2 MB/s
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:28.794 09:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12
00:13:28.794 /dev/nbd12
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:28.794 1+0 records in
00:13:28.794 1+0 records out
00:13:28.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500161 s, 8.2 MB/s
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:28.794 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:13:29.052 /dev/nbd13
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:29.052 1+0 records in
00:13:29.052 1+0 records out
00:13:29.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399893 s, 10.2 MB/s
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:29.052 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:29.310 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:29.310 {
00:13:29.310 "nbd_device": "/dev/nbd0",
00:13:29.310 "bdev_name": "nvme0n1"
00:13:29.310 },
00:13:29.310 {
00:13:29.310 "nbd_device": "/dev/nbd1",
00:13:29.310 "bdev_name": "nvme1n1"
00:13:29.310 },
00:13:29.310 {
00:13:29.310 "nbd_device": "/dev/nbd10",
00:13:29.310 "bdev_name": "nvme2n1"
00:13:29.310 },
00:13:29.310 {
00:13:29.310 "nbd_device": "/dev/nbd11",
00:13:29.310 "bdev_name": "nvme2n2"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd12",
00:13:29.311 "bdev_name": "nvme2n3"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd13",
00:13:29.311 "bdev_name": "nvme3n1"
00:13:29.311 }
00:13:29.311 ]'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd0",
00:13:29.311 "bdev_name": "nvme0n1"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd1",
00:13:29.311 "bdev_name": "nvme1n1"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd10",
00:13:29.311 "bdev_name": "nvme2n1"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd11",
00:13:29.311 "bdev_name": "nvme2n2"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd12",
00:13:29.311 "bdev_name": "nvme2n3"
00:13:29.311 },
00:13:29.311 {
00:13:29.311 "nbd_device": "/dev/nbd13",
00:13:29.311 "bdev_name": "nvme3n1"
00:13:29.311 }
00:13:29.311 ]'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:13:29.311 /dev/nbd1
00:13:29.311 /dev/nbd10
00:13:29.311 /dev/nbd11
00:13:29.311 /dev/nbd12
00:13:29.311 /dev/nbd13'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:13:29.311 /dev/nbd1
00:13:29.311 /dev/nbd10
00:13:29.311 /dev/nbd11
00:13:29.311 /dev/nbd12
00:13:29.311 /dev/nbd13'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
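
[Editor's note] The count check that just passed boils down to counting how many /dev/nbd* names came back from nbd_get_disks. A small sketch of that check, equivalent to the echo-into-grep seen in the trace (printf is used here to keep one name per line; the expected count of 6 matches this run):

    # Every started bdev must be visible as an NBD device; six in this run.
    count=$(printf '%s\n' "${nbd_disks_name[@]}" | grep -c /dev/nbd)
    if [ "$count" -ne 6 ]; then
        echo "unexpected NBD device count: $count" >&2
        exit 1
    fi
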
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:13:29.311 256+0 records in
00:13:29.311 256+0 records out
00:13:29.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631995 s, 166 MB/s
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.311 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:13:29.585 256+0 records in
00:13:29.585 256+0 records out
00:13:29.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101392 s, 10.3 MB/s
00:13:29.585 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.585 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:13:29.585 256+0 records in
00:13:29.585 256+0 records out
00:13:29.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0949884 s, 11.0 MB/s
00:13:29.585 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.585 09:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:13:29.585 256+0 records in
00:13:29.585 256+0 records out
00:13:29.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0892043 s, 11.8 MB/s
00:13:29.585 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.585 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:13:29.845 256+0 records in
00:13:29.845 256+0 records out
00:13:29.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725085 s, 14.5 MB/s
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:13:29.845 256+0 records in
00:13:29.845 256+0 records out
00:13:29.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0790612 s, 13.3 MB/s
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:13:29.845 256+0 records in
00:13:29.845 256+0 records out
00:13:29.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0702956 s, 14.9 MB/s
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.845 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
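
[Editor's note] The write/verify pass above is the core data-integrity technique of this test. A condensed sketch of it, using the same dd and cmp invocations as the trace (the temporary-file path matches the log; a real run needs write access to the devices):

    # Fill a 1 MiB file with random data, stream it to each NBD device with
    # O_DIRECT, then compare the first 1M of every device byte-for-byte.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch fails the test
    done
    rm "$tmp_file"
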
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:29.846 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:30.103 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:30.360 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:30.618 09:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:30.948 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.207 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:31.470 09:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:31.732 malloc_lvol_verify 00:13:31.732 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:31.990 6679ad77-cd10-43c2-9e3d-4f42a5a8e95f 00:13:31.990 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:32.247 d74af35c-729c-4ce9-bb87-172ab1de586c 00:13:32.247 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:32.562 /dev/nbd0 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:32.562 mke2fs 1.47.0 (5-Feb-2023) 00:13:32.562 Discarding device blocks: 0/4096 done 00:13:32.562 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:32.562 00:13:32.562 Allocating group tables: 0/1 done 00:13:32.562 Writing inode tables: 0/1 done 00:13:32.562 Creating journal (1024 blocks): done 00:13:32.562 Writing superblocks and filesystem accounting information: 0/1 done 00:13:32.562 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.562 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.821 09:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69752 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69752 ']' 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69752 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69752 00:13:32.821 killing process with pid 69752 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69752' 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69752 00:13:32.821 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69752 00:13:33.386 09:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:33.386 00:13:33.386 real 0m10.301s 00:13:33.386 user 0m14.783s 00:13:33.386 sys 0m3.281s 00:13:33.386 09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.386 ************************************ 00:13:33.386 END TEST bdev_nbd 00:13:33.386 ************************************ 00:13:33.386 
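The bdev_nbd test that just ended boils down to a small write/verify/teardown loop: fill a scratch file, push it through each exported /dev/nbd device with O_DIRECT, byte-compare the device against the file, then stop each disk over the RPC socket and poll /proc/partitions until the kernel drops it. A condensed sketch of that loop, reconstructed from the trace above (the scratch file is created earlier in the real test; /dev/urandom and the 0.1 s poll interval are assumptions added for self-containment):

```bash
#!/usr/bin/env bash
# Sketch of the write/verify/teardown loop traced above. Assumes a running
# SPDK NBD app on /var/tmp/spdk-nbd.sock with the six devices exported.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data

for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct    # write past the page cache
  cmp -b -n 1M "$tmp_file" "$dev"                               # read back and byte-compare
done
rm "$tmp_file"

for dev in "${nbd_list[@]}"; do
  "$rpc" -s "$sock" nbd_stop_disk "$dev"
  name=$(basename "$dev")
  # Poll until the kernel drops the device, up to 20 tries as in waitfornbd_exit.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions || break
    sleep 0.1
  done
done
```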
09:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.386 09:25:58 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:33.386 09:25:58 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:33.386 09:25:58 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:33.387 09:25:58 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:33.387 09:25:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.387 09:25:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.387 09:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.387 ************************************ 00:13:33.387 START TEST bdev_fio 00:13:33.387 ************************************ 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:33.387 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:33.387 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.645 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:33.646 ************************************ 00:13:33.646 START TEST bdev_fio_rw_verify 00:13:33.646 ************************************ 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.646 09:25:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.646 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.646 fio-3.35 00:13:33.646 Starting 6 threads 00:13:45.924 00:13:45.924 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70159: Wed Nov 20 09:26:09 2024 00:13:45.924 read: IOPS=19.2k, BW=75.2MiB/s (78.8MB/s)(752MiB/10001msec) 00:13:45.924 slat (usec): min=2, max=3052, avg= 5.36, stdev=12.24 00:13:45.924 clat (usec): min=83, max=11617, avg=894.91, 
stdev=725.42 00:13:45.924 lat (usec): min=88, max=11621, avg=900.26, stdev=725.95 00:13:45.924 clat percentiles (usec): 00:13:45.924 | 50.000th=[ 668], 99.000th=[ 3359], 99.900th=[ 4948], 99.990th=[ 6849], 00:13:45.924 | 99.999th=[11600] 00:13:45.924 write: IOPS=19.6k, BW=76.4MiB/s (80.1MB/s)(764MiB/10001msec); 0 zone resets 00:13:45.924 slat (usec): min=9, max=6157, avg=38.48, stdev=127.63 00:13:45.924 clat (usec): min=66, max=360974, avg=1267.11, stdev=5690.36 00:13:45.924 lat (usec): min=80, max=361063, avg=1305.59, stdev=5692.25 00:13:45.924 clat percentiles (usec): 00:13:45.924 | 50.000th=[ 930], 99.000th=[ 4015], 99.900th=[ 6718], 00:13:45.924 | 99.990th=[358613], 99.999th=[362808] 00:13:45.924 bw ( KiB/s): min=25712, max=141423, per=100.00%, avg=79492.95, stdev=4822.39, samples=114 00:13:45.924 iops : min= 6426, max=35355, avg=19871.95, stdev=1205.67, samples=114 00:13:45.924 lat (usec) : 100=0.03%, 250=8.10%, 500=19.33%, 750=18.99%, 1000=13.81% 00:13:45.924 lat (msec) : 2=28.52%, 4=10.51%, 10=0.67%, 20=0.01%, 50=0.02% 00:13:45.924 lat (msec) : 500=0.01% 00:13:45.924 cpu : usr=45.43%, sys=30.68%, ctx=6497, majf=0, minf=17999 00:13:45.924 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.924 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.924 issued rwts: total=192501,195670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:45.924 00:13:45.924 Run status group 0 (all jobs): 00:13:45.924 READ: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=752MiB (788MB), run=10001-10001msec 00:13:45.924 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=764MiB (801MB), run=10001-10001msec 00:13:45.924 ----------------------------------------------------- 00:13:45.924 Suppressions used: 00:13:45.924 count bytes template 00:13:45.924 6 48 /usr/src/fio/parse.c 00:13:45.924 3036 291456 /usr/src/fio/iolog.c 00:13:45.925 1 8 libtcmalloc_minimal.so 00:13:45.925 1 904 libcrypto.so 00:13:45.925 ----------------------------------------------------- 00:13:45.925 00:13:45.925 00:13:45.925 real 0m11.822s 00:13:45.925 user 0m28.669s 00:13:45.925 sys 0m18.696s 00:13:45.925 ************************************ 00:13:45.925 END TEST bdev_fio_rw_verify 00:13:45.925 ************************************ 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:45.925 09:26:10 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "000b5e90-6fff-4a15-8a9c-34a44ae4c779"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "000b5e90-6fff-4a15-8a9c-34a44ae4c779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c0994509-dbd4-46e1-b9e7-d4499ab3b524"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c0994509-dbd4-46e1-b9e7-d4499ab3b524",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "257286cd-5084-4385-837c-17982b483def"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "257286cd-5084-4385-837c-17982b483def",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d598fec6-8a60-46d0-8704-98a02d993351"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d598fec6-8a60-46d0-8704-98a02d993351",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "01f3a77c-b125-4c64-8405-8714eed34a5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "01f3a77c-b125-4c64-8405-8714eed34a5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d8fd4ca3-edbf-4d92-a323-6fe009734743"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8fd4ca3-edbf-4d92-a323-6fe009734743",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.925 /home/vagrant/spdk_repo/spdk 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:45.925 
09:26:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:45.925 00:13:45.925 real 0m11.983s 00:13:45.925 user 0m28.755s 00:13:45.925 sys 0m18.760s 00:13:45.925 ************************************ 00:13:45.925 END TEST bdev_fio 00:13:45.925 ************************************ 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.925 09:26:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 09:26:10 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:45.925 09:26:10 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.925 09:26:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:45.925 09:26:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.925 09:26:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 ************************************ 00:13:45.925 START TEST bdev_verify 00:13:45.925 ************************************ 00:13:45.925 09:26:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.925 [2024-11-20 09:26:10.939738] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:45.925 [2024-11-20 09:26:10.939865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70328 ] 00:13:45.925 [2024-11-20 09:26:11.099013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.925 [2024-11-20 09:26:11.206987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.925 [2024-11-20 09:26:11.207097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.186 Running I/O for 5 seconds... 
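The bdev_verify test now running is a single bdevperf invocation over the bdev.json dumped earlier. Stripped of the wrapper functions it reads as below; the flag annotations follow the values in the trace, and the -C/-m pairing is consistent with every bdev appearing twice in the result table, once per core:

```bash
# The bdev_verify run, reduced to one command.
#   -q 128   : 128 outstanding I/Os per job
#   -o 4096  : 4 KiB I/O size
#   -w verify: write a pattern, read it back, and compare
#   -t 5     : run for 5 seconds
#   -C       : let every core in the mask drive every bdev
#   -m 0x3   : reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```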
00:13:48.491 23968.00 IOPS, 93.62 MiB/s [2024-11-20T09:26:14.883Z] 22464.00 IOPS, 87.75 MiB/s [2024-11-20T09:26:16.256Z] 22570.67 IOPS, 88.17 MiB/s [2024-11-20T09:26:16.827Z] 22120.00 IOPS, 86.41 MiB/s [2024-11-20T09:26:16.827Z] 21958.40 IOPS, 85.77 MiB/s 00:13:51.371 Latency(us) 00:13:51.371 [2024-11-20T09:26:16.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.371 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.371 Verification LBA range: start 0x0 length 0xa0000 00:13:51.372 nvme0n1 : 5.07 1766.38 6.90 0.00 0.00 72314.19 6276.33 71787.13 00:13:51.372 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0xa0000 length 0xa0000 00:13:51.372 nvme0n1 : 5.03 1703.70 6.66 0.00 0.00 74981.26 5696.59 136314.88 00:13:51.372 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x0 length 0xbd0bd 00:13:51.372 nvme1n1 : 5.07 2172.64 8.49 0.00 0.00 58581.56 3755.72 60494.77 00:13:51.372 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:51.372 nvme1n1 : 5.04 2082.90 8.14 0.00 0.00 61166.20 3730.51 114536.76 00:13:51.372 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x0 length 0x80000 00:13:51.372 nvme2n1 : 5.07 1791.10 7.00 0.00 0.00 70782.51 11796.48 65737.65 00:13:51.372 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x80000 length 0x80000 00:13:51.372 nvme2n1 : 5.06 1745.52 6.82 0.00 0.00 72940.22 8872.57 111310.38 00:13:51.372 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x0 length 0x80000 00:13:51.372 nvme2n2 : 5.06 1769.73 6.91 0.00 0.00 71462.26 9225.45 69770.63 00:13:51.372 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x80000 length 0x80000 00:13:51.372 nvme2n2 : 5.05 1723.21 6.73 0.00 0.00 73653.69 7108.14 98404.82 00:13:51.372 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x0 length 0x80000 00:13:51.372 nvme2n3 : 5.08 1789.84 6.99 0.00 0.00 70532.53 9376.69 66947.54 00:13:51.372 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x80000 length 0x80000 00:13:51.372 nvme2n3 : 5.06 1719.72 6.72 0.00 0.00 73596.22 9527.93 113730.17 00:13:51.372 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x0 length 0x20000 00:13:51.372 nvme3n1 : 5.09 1786.86 6.98 0.00 0.00 70591.57 4083.40 71787.13 00:13:51.372 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.372 Verification LBA range: start 0x20000 length 0x20000 00:13:51.372 nvme3n1 : 5.07 1717.74 6.71 0.00 0.00 73535.79 4108.60 136314.88 00:13:51.372 [2024-11-20T09:26:16.828Z] =================================================================================================================== 00:13:51.372 [2024-11-20T09:26:16.828Z] Total : 21769.33 85.04 0.00 0.00 69957.81 3730.51 136314.88 00:13:52.312 00:13:52.312 real 0m6.627s 00:13:52.312 user 0m11.013s 00:13:52.312 sys 0m1.199s 00:13:52.312 09:26:17 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.312 09:26:17 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:52.312 ************************************ 00:13:52.312 END TEST bdev_verify 00:13:52.312 ************************************ 00:13:52.312 09:26:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:52.312 09:26:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:52.312 09:26:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.312 09:26:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.312 ************************************ 00:13:52.312 START TEST bdev_verify_big_io 00:13:52.312 ************************************ 00:13:52.312 09:26:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:52.312 [2024-11-20 09:26:17.633933] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:52.312 [2024-11-20 09:26:17.634075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70422 ] 00:13:52.572 [2024-11-20 09:26:17.795704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.572 [2024-11-20 09:26:17.924554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.572 [2024-11-20 09:26:17.924694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.141 Running I/O for 5 seconds... 
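A quick sanity check on the throughput lines bdevperf prints: bandwidth is IOPS times I/O size, so the 4 KiB verify pass above and the 64 KiB big-I/O pass that follows should (and do) satisfy the same arithmetic:

```bash
# IOPS × I/O size → MiB/s, checked against two summary lines from this log.
awk 'BEGIN {
  printf "verify (4 KiB) : %.2f MiB/s\n", 21958.40 * 4096  / (1024 * 1024)  # log says 85.77
  printf "big_io (64 KiB): %.2f MiB/s\n", 1330.06  * 65536 / (1024 * 1024)  # log says 83.13
}'
```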
00:13:59.230 1280.00 IOPS, 80.00 MiB/s [2024-11-20T09:26:24.686Z] 2712.00 IOPS, 169.50 MiB/s 00:13:59.230 Latency(us) 00:13:59.230 [2024-11-20T09:26:24.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.230 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0xa000 00:13:59.230 nvme0n1 : 6.03 107.46 6.72 0.00 0.00 1144279.25 21173.17 1096971.82 00:13:59.230 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0xa000 length 0xa000 00:13:59.230 nvme0n1 : 6.07 105.36 6.59 0.00 0.00 1173178.74 5772.21 1148594.02 00:13:59.230 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0xbd0b 00:13:59.230 nvme1n1 : 6.03 127.28 7.96 0.00 0.00 948212.97 76223.41 1200216.22 00:13:59.230 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:59.230 nvme1n1 : 6.08 123.76 7.74 0.00 0.00 957602.97 54848.59 1355082.83 00:13:59.230 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0x8000 00:13:59.230 nvme2n1 : 6.04 95.76 5.98 0.00 0.00 1217706.89 8469.27 2271376.94 00:13:59.230 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x8000 length 0x8000 00:13:59.230 nvme2n1 : 6.07 102.85 6.43 0.00 0.00 1111070.46 79853.10 1058255.16 00:13:59.230 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0x8000 00:13:59.230 nvme2n2 : 6.05 76.70 4.79 0.00 0.00 1493952.22 83886.08 3187671.04 00:13:59.230 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x8000 length 0x8000 00:13:59.230 nvme2n2 : 6.08 123.72 7.73 0.00 0.00 911636.06 51017.26 1871304.86 00:13:59.230 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0x8000 00:13:59.230 nvme2n3 : 6.04 103.25 6.45 0.00 0.00 1071114.18 79449.80 2361715.79 00:13:59.230 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x8000 length 0x8000 00:13:59.230 nvme2n3 : 6.08 94.73 5.92 0.00 0.00 1149652.33 37305.11 2000360.37 00:13:59.230 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x0 length 0x2000 00:13:59.230 nvme3n1 : 6.05 156.09 9.76 0.00 0.00 684791.91 6326.74 1084066.26 00:13:59.230 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:59.230 Verification LBA range: start 0x2000 length 0x2000 00:13:59.231 nvme3n1 : 6.08 113.09 7.07 0.00 0.00 929524.90 1991.29 1535760.54 00:13:59.231 [2024-11-20T09:26:24.687Z] =================================================================================================================== 00:13:59.231 [2024-11-20T09:26:24.687Z] Total : 1330.06 83.13 0.00 0.00 1034138.83 1991.29 3187671.04 00:14:00.163 00:14:00.163 real 0m7.896s 00:14:00.163 user 0m14.465s 00:14:00.163 sys 0m0.461s 00:14:00.163 09:26:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.163 ************************************ 00:14:00.163 END TEST bdev_verify_big_io 
00:14:00.163 ************************************ 00:14:00.163 09:26:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.163 09:26:25 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.163 09:26:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:00.163 09:26:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.163 09:26:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.163 ************************************ 00:14:00.163 START TEST bdev_write_zeroes 00:14:00.163 ************************************ 00:14:00.163 09:26:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.163 [2024-11-20 09:26:25.574253] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:14:00.163 [2024-11-20 09:26:25.574385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70544 ] 00:14:00.421 [2024-11-20 09:26:25.732153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.421 [2024-11-20 09:26:25.834418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.987 Running I/O for 1 seconds... 00:14:01.921 60192.00 IOPS, 235.12 MiB/s 00:14:01.921 Latency(us) 00:14:01.921 [2024-11-20T09:26:27.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.921 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme0n1 : 1.03 9102.10 35.56 0.00 0.00 14049.95 7057.72 24197.91 00:14:01.921 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme1n1 : 1.02 14220.03 55.55 0.00 0.00 8984.02 5091.64 23391.31 00:14:01.921 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme2n1 : 1.02 9149.97 35.74 0.00 0.00 13954.53 8418.86 23996.26 00:14:01.921 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme2n2 : 1.02 9134.68 35.68 0.00 0.00 13895.11 5192.47 24601.21 00:14:01.921 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme2n3 : 1.03 9086.77 35.50 0.00 0.00 13960.15 5318.50 25004.50 00:14:01.921 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:01.921 nvme3n1 : 1.02 9117.41 35.61 0.00 0.00 13895.74 5041.23 25811.10 00:14:01.921 [2024-11-20T09:26:27.377Z] =================================================================================================================== 00:14:01.921 [2024-11-20T09:26:27.377Z] Total : 59810.96 233.64 0.00 0.00 12774.89 5041.23 25811.10 00:14:02.855 00:14:02.855 real 0m2.482s 00:14:02.855 user 0m1.790s 00:14:02.855 sys 0m0.494s 00:14:02.855 09:26:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.855 09:26:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:02.855 ************************************ 00:14:02.855 END TEST 
bdev_write_zeroes 00:14:02.855 ************************************ 00:14:02.855 09:26:28 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.855 09:26:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:02.855 09:26:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.855 09:26:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.855 ************************************ 00:14:02.855 START TEST bdev_json_nonenclosed 00:14:02.855 ************************************ 00:14:02.855 09:26:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.855 [2024-11-20 09:26:28.140671] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:14:02.855 [2024-11-20 09:26:28.140792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70597 ] 00:14:02.855 [2024-11-20 09:26:28.300758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.112 [2024-11-20 09:26:28.468367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.112 [2024-11-20 09:26:28.468461] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:03.112 [2024-11-20 09:26:28.468482] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:03.112 [2024-11-20 09:26:28.468492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.370 00:14:03.370 real 0m0.580s 00:14:03.370 user 0m0.394s 00:14:03.370 sys 0m0.081s 00:14:03.370 ************************************ 00:14:03.370 09:26:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.370 09:26:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:03.370 END TEST bdev_json_nonenclosed 00:14:03.370 ************************************ 00:14:03.370 09:26:28 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.370 09:26:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:03.370 09:26:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.370 09:26:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.370 ************************************ 00:14:03.370 START TEST bdev_json_nonarray 00:14:03.370 ************************************ 00:14:03.370 09:26:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.370 [2024-11-20 09:26:28.754136] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:14:03.370 [2024-11-20 09:26:28.754262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70617 ] 00:14:03.629 [2024-11-20 09:26:28.912701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.629 [2024-11-20 09:26:29.015516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.629 [2024-11-20 09:26:29.015609] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:03.629 [2024-11-20 09:26:29.015627] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:03.629 [2024-11-20 09:26:29.015636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.894 00:14:03.894 real 0m0.507s 00:14:03.894 user 0m0.304s 00:14:03.894 sys 0m0.098s 00:14:03.894 09:26:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.894 ************************************ 00:14:03.894 END TEST bdev_json_nonarray 00:14:03.894 ************************************ 00:14:03.894 09:26:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:03.894 09:26:29 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:04.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:26.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:26.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.558 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.558 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.558 00:14:34.558 real 1m22.851s 00:14:34.558 user 1m27.966s 00:14:34.559 sys 1m20.369s 00:14:34.559 09:26:59 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.559 09:26:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.559 ************************************ 00:14:34.559 END TEST blockdev_xnvme 00:14:34.559 ************************************ 00:14:34.559 09:26:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:34.559 09:26:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.559 09:26:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.559 09:26:59 -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.559 ************************************ 00:14:34.559 START TEST ublk 00:14:34.559 ************************************ 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:34.559 * Looking for test storage... 00:14:34.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.559 09:26:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.559 09:26:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.559 09:26:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.559 09:26:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.559 09:26:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.559 09:26:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:34.559 09:26:59 ublk -- scripts/common.sh@345 -- # : 1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.559 09:26:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.559 09:26:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@353 -- # local d=1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.559 09:26:59 ublk -- scripts/common.sh@355 -- # echo 1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.559 09:26:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@353 -- # local d=2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.559 09:26:59 ublk -- scripts/common.sh@355 -- # echo 2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.559 09:26:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.559 09:26:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.559 09:26:59 ublk -- scripts/common.sh@368 -- # return 0 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.559 --rc genhtml_branch_coverage=1 00:14:34.559 --rc genhtml_function_coverage=1 00:14:34.559 --rc genhtml_legend=1 00:14:34.559 --rc geninfo_all_blocks=1 00:14:34.559 --rc geninfo_unexecuted_blocks=1 00:14:34.559 00:14:34.559 ' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.559 --rc genhtml_branch_coverage=1 00:14:34.559 --rc genhtml_function_coverage=1 00:14:34.559 --rc genhtml_legend=1 00:14:34.559 --rc geninfo_all_blocks=1 00:14:34.559 --rc geninfo_unexecuted_blocks=1 00:14:34.559 00:14:34.559 ' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.559 --rc genhtml_branch_coverage=1 00:14:34.559 --rc genhtml_function_coverage=1 00:14:34.559 --rc genhtml_legend=1 00:14:34.559 --rc geninfo_all_blocks=1 00:14:34.559 --rc geninfo_unexecuted_blocks=1 00:14:34.559 00:14:34.559 ' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.559 --rc genhtml_branch_coverage=1 00:14:34.559 --rc genhtml_function_coverage=1 00:14:34.559 --rc genhtml_legend=1 00:14:34.559 --rc geninfo_all_blocks=1 00:14:34.559 --rc geninfo_unexecuted_blocks=1 00:14:34.559 00:14:34.559 ' 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:34.559 09:26:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:34.559 09:26:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:34.559 09:26:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:34.559 09:26:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:34.559 09:26:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:34.559 09:26:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:34.559 09:26:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:34.559 09:26:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:34.559 09:26:59 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:34.559 09:26:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.559 09:26:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.559 ************************************ 00:14:34.559 START TEST test_save_ublk_config 00:14:34.559 ************************************ 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70933 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70933 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70933 ']' 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:34.559 09:26:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:34.559 [2024-11-20 09:26:59.555488] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
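
The trace above loads ublk_drv, forks a dedicated spdk_tgt with ublk debug logging, and spins in waitforlisten until the JSON-RPC socket on /var/tmp/spdk.sock answers. A minimal sketch of the same bring-up outside the harness, assuming the repo path used in this run and with rpc.py standing in for the rpc_cmd wrapper (framework_wait_init here is one way to wait for the socket; the harness polls it differently):

    # load the kernel-side driver the whole suite depends on (ublk.sh@133)
    sudo modprobe ublk_drv
    # start the SPDK target with ublk debug logging enabled, as in the trace
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &
    # block until subsystem init completes and the RPC socket is serving
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init
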
00:14:34.559 [2024-11-20 09:26:59.555612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70933 ] 00:14:34.559 [2024-11-20 09:26:59.714440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.559 [2024-11-20 09:26:59.814060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.124 09:27:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 [2024-11-20 09:27:03.385385] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:38.409 [2024-11-20 09:27:03.386228] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:38.409 malloc0 00:14:38.409 [2024-11-20 09:27:03.418360] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:38.409 [2024-11-20 09:27:03.418438] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:38.409 [2024-11-20 09:27:03.418449] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:38.409 [2024-11-20 09:27:03.418456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:40.360 [2024-11-20 09:27:05.699730] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:40.360 [2024-11-20 09:27:05.699764] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:41.740 [2024-11-20 09:27:06.969377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:41.740 [2024-11-20 09:27:06.969503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:44.313 [2024-11-20 09:27:09.378329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:44.313 0 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 09:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:44.313 "subsystems": [ 00:14:44.313 { 00:14:44.313 "subsystem": "fsdev", 00:14:44.313 "config": [ 00:14:44.313 { 00:14:44.313 "method": "fsdev_set_opts", 00:14:44.313 "params": { 00:14:44.313 "fsdev_io_pool_size": 65535, 00:14:44.313 "fsdev_io_cache_size": 256 00:14:44.313 } 00:14:44.313 } 00:14:44.313 ] 00:14:44.313 }, 00:14:44.313 { 00:14:44.313 "subsystem": "keyring", 00:14:44.313 "config": [] 00:14:44.313 }, 00:14:44.313 { 00:14:44.313 "subsystem": "iobuf", 00:14:44.313 "config": [ 00:14:44.313 { 
00:14:44.313 "method": "iobuf_set_options", 00:14:44.313 "params": { 00:14:44.313 "small_pool_count": 8192, 00:14:44.313 "large_pool_count": 1024, 00:14:44.313 "small_bufsize": 8192, 00:14:44.314 "large_bufsize": 135168, 00:14:44.314 "enable_numa": false 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "sock", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "sock_set_default_impl", 00:14:44.314 "params": { 00:14:44.314 "impl_name": "posix" 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "sock_impl_set_options", 00:14:44.314 "params": { 00:14:44.314 "impl_name": "ssl", 00:14:44.314 "recv_buf_size": 4096, 00:14:44.314 "send_buf_size": 4096, 00:14:44.314 "enable_recv_pipe": true, 00:14:44.314 "enable_quickack": false, 00:14:44.314 "enable_placement_id": 0, 00:14:44.314 "enable_zerocopy_send_server": true, 00:14:44.314 "enable_zerocopy_send_client": false, 00:14:44.314 "zerocopy_threshold": 0, 00:14:44.314 "tls_version": 0, 00:14:44.314 "enable_ktls": false 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "sock_impl_set_options", 00:14:44.314 "params": { 00:14:44.314 "impl_name": "posix", 00:14:44.314 "recv_buf_size": 2097152, 00:14:44.314 "send_buf_size": 2097152, 00:14:44.314 "enable_recv_pipe": true, 00:14:44.314 "enable_quickack": false, 00:14:44.314 "enable_placement_id": 0, 00:14:44.314 "enable_zerocopy_send_server": true, 00:14:44.314 "enable_zerocopy_send_client": false, 00:14:44.314 "zerocopy_threshold": 0, 00:14:44.314 "tls_version": 0, 00:14:44.314 "enable_ktls": false 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "vmd", 00:14:44.314 "config": [] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "accel", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "accel_set_options", 00:14:44.314 "params": { 00:14:44.314 "small_cache_size": 128, 00:14:44.314 "large_cache_size": 16, 00:14:44.314 "task_count": 2048, 00:14:44.314 "sequence_count": 2048, 00:14:44.314 "buf_count": 2048 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "bdev", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "bdev_set_options", 00:14:44.314 "params": { 00:14:44.314 "bdev_io_pool_size": 65535, 00:14:44.314 "bdev_io_cache_size": 256, 00:14:44.314 "bdev_auto_examine": true, 00:14:44.314 "iobuf_small_cache_size": 128, 00:14:44.314 "iobuf_large_cache_size": 16 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_raid_set_options", 00:14:44.314 "params": { 00:14:44.314 "process_window_size_kb": 1024, 00:14:44.314 "process_max_bandwidth_mb_sec": 0 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_iscsi_set_options", 00:14:44.314 "params": { 00:14:44.314 "timeout_sec": 30 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_nvme_set_options", 00:14:44.314 "params": { 00:14:44.314 "action_on_timeout": "none", 00:14:44.314 "timeout_us": 0, 00:14:44.314 "timeout_admin_us": 0, 00:14:44.314 "keep_alive_timeout_ms": 10000, 00:14:44.314 "arbitration_burst": 0, 00:14:44.314 "low_priority_weight": 0, 00:14:44.314 "medium_priority_weight": 0, 00:14:44.314 "high_priority_weight": 0, 00:14:44.314 "nvme_adminq_poll_period_us": 10000, 00:14:44.314 "nvme_ioq_poll_period_us": 0, 00:14:44.314 "io_queue_requests": 0, 00:14:44.314 "delay_cmd_submit": true, 00:14:44.314 "transport_retry_count": 4, 00:14:44.314 
"bdev_retry_count": 3, 00:14:44.314 "transport_ack_timeout": 0, 00:14:44.314 "ctrlr_loss_timeout_sec": 0, 00:14:44.314 "reconnect_delay_sec": 0, 00:14:44.314 "fast_io_fail_timeout_sec": 0, 00:14:44.314 "disable_auto_failback": false, 00:14:44.314 "generate_uuids": false, 00:14:44.314 "transport_tos": 0, 00:14:44.314 "nvme_error_stat": false, 00:14:44.314 "rdma_srq_size": 0, 00:14:44.314 "io_path_stat": false, 00:14:44.314 "allow_accel_sequence": false, 00:14:44.314 "rdma_max_cq_size": 0, 00:14:44.314 "rdma_cm_event_timeout_ms": 0, 00:14:44.314 "dhchap_digests": [ 00:14:44.314 "sha256", 00:14:44.314 "sha384", 00:14:44.314 "sha512" 00:14:44.314 ], 00:14:44.314 "dhchap_dhgroups": [ 00:14:44.314 "null", 00:14:44.314 "ffdhe2048", 00:14:44.314 "ffdhe3072", 00:14:44.314 "ffdhe4096", 00:14:44.314 "ffdhe6144", 00:14:44.314 "ffdhe8192" 00:14:44.314 ] 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_nvme_set_hotplug", 00:14:44.314 "params": { 00:14:44.314 "period_us": 100000, 00:14:44.314 "enable": false 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_malloc_create", 00:14:44.314 "params": { 00:14:44.314 "name": "malloc0", 00:14:44.314 "num_blocks": 8192, 00:14:44.314 "block_size": 4096, 00:14:44.314 "physical_block_size": 4096, 00:14:44.314 "uuid": "22769644-d03e-495e-ac85-5bda15901af4", 00:14:44.314 "optimal_io_boundary": 0, 00:14:44.314 "md_size": 0, 00:14:44.314 "dif_type": 0, 00:14:44.314 "dif_is_head_of_md": false, 00:14:44.314 "dif_pi_format": 0 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "bdev_wait_for_examine" 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "scsi", 00:14:44.314 "config": null 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "scheduler", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "framework_set_scheduler", 00:14:44.314 "params": { 00:14:44.314 "name": "static" 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "vhost_scsi", 00:14:44.314 "config": [] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "vhost_blk", 00:14:44.314 "config": [] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "ublk", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "ublk_create_target", 00:14:44.314 "params": { 00:14:44.314 "cpumask": "1" 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "ublk_start_disk", 00:14:44.314 "params": { 00:14:44.314 "bdev_name": "malloc0", 00:14:44.314 "ublk_id": 0, 00:14:44.314 "num_queues": 1, 00:14:44.314 "queue_depth": 128 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "nbd", 00:14:44.314 "config": [] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "nvmf", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "nvmf_set_config", 00:14:44.314 "params": { 00:14:44.314 "discovery_filter": "match_any", 00:14:44.314 "admin_cmd_passthru": { 00:14:44.314 "identify_ctrlr": false 00:14:44.314 }, 00:14:44.314 "dhchap_digests": [ 00:14:44.314 "sha256", 00:14:44.314 "sha384", 00:14:44.314 "sha512" 00:14:44.314 ], 00:14:44.314 "dhchap_dhgroups": [ 00:14:44.314 "null", 00:14:44.314 "ffdhe2048", 00:14:44.314 "ffdhe3072", 00:14:44.314 "ffdhe4096", 00:14:44.314 "ffdhe6144", 00:14:44.314 "ffdhe8192" 00:14:44.314 ] 00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "nvmf_set_max_subsystems", 00:14:44.314 "params": { 00:14:44.314 "max_subsystems": 1024 
00:14:44.314 } 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "method": "nvmf_set_crdt", 00:14:44.314 "params": { 00:14:44.314 "crdt1": 0, 00:14:44.314 "crdt2": 0, 00:14:44.314 "crdt3": 0 00:14:44.314 } 00:14:44.314 } 00:14:44.314 ] 00:14:44.314 }, 00:14:44.314 { 00:14:44.314 "subsystem": "iscsi", 00:14:44.314 "config": [ 00:14:44.314 { 00:14:44.314 "method": "iscsi_set_options", 00:14:44.314 "params": { 00:14:44.314 "node_base": "iqn.2016-06.io.spdk", 00:14:44.314 "max_sessions": 128, 00:14:44.314 "max_connections_per_session": 2, 00:14:44.314 "max_queue_depth": 64, 00:14:44.314 "default_time2wait": 2, 00:14:44.314 "default_time2retain": 20, 00:14:44.314 "first_burst_length": 8192, 00:14:44.314 "immediate_data": true, 00:14:44.314 "allow_duplicated_isid": false, 00:14:44.314 "error_recovery_level": 0, 00:14:44.315 "nop_timeout": 60, 00:14:44.315 "nop_in_interval": 30, 00:14:44.315 "disable_chap": false, 00:14:44.315 "require_chap": false, 00:14:44.315 "mutual_chap": false, 00:14:44.315 "chap_group": 0, 00:14:44.315 "max_large_datain_per_connection": 64, 00:14:44.315 "max_r2t_per_connection": 4, 00:14:44.315 "pdu_pool_size": 36864, 00:14:44.315 "immediate_data_pool_size": 16384, 00:14:44.315 "data_out_pool_size": 2048 00:14:44.315 } 00:14:44.315 } 00:14:44.315 ] 00:14:44.315 } 00:14:44.315 ] 00:14:44.315 }' 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70933 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70933 ']' 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70933 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70933 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.315 killing process with pid 70933 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70933' 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70933 00:14:44.315 09:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70933 00:14:45.253 [2024-11-20 09:27:10.594358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:45.253 [2024-11-20 09:27:10.635396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:45.253 [2024-11-20 09:27:10.635518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:45.253 [2024-11-20 09:27:10.643346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:45.253 [2024-11-20 09:27:10.643394] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:45.253 [2024-11-20 09:27:10.643404] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:45.253 [2024-11-20 09:27:10.643429] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:45.253 [2024-11-20 09:27:10.643551] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71079 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 71079 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 71079 ']' 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 09:27:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:46.649 "subsystems": [ 00:14:46.649 { 00:14:46.649 "subsystem": "fsdev", 00:14:46.649 "config": [ 00:14:46.649 { 00:14:46.649 "method": "fsdev_set_opts", 00:14:46.649 "params": { 00:14:46.649 "fsdev_io_pool_size": 65535, 00:14:46.649 "fsdev_io_cache_size": 256 00:14:46.649 } 00:14:46.649 } 00:14:46.649 ] 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "subsystem": "keyring", 00:14:46.649 "config": [] 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "subsystem": "iobuf", 00:14:46.649 "config": [ 00:14:46.649 { 00:14:46.649 "method": "iobuf_set_options", 00:14:46.649 "params": { 00:14:46.649 "small_pool_count": 8192, 00:14:46.649 "large_pool_count": 1024, 00:14:46.649 "small_bufsize": 8192, 00:14:46.649 "large_bufsize": 135168, 00:14:46.649 "enable_numa": false 00:14:46.649 } 00:14:46.649 } 00:14:46.649 ] 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "subsystem": "sock", 00:14:46.649 "config": [ 00:14:46.649 { 00:14:46.649 "method": "sock_set_default_impl", 00:14:46.649 "params": { 00:14:46.649 "impl_name": "posix" 00:14:46.649 } 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "method": "sock_impl_set_options", 00:14:46.649 "params": { 00:14:46.649 "impl_name": "ssl", 00:14:46.649 "recv_buf_size": 4096, 00:14:46.649 "send_buf_size": 4096, 00:14:46.649 "enable_recv_pipe": true, 00:14:46.649 "enable_quickack": false, 00:14:46.649 "enable_placement_id": 0, 00:14:46.649 "enable_zerocopy_send_server": true, 00:14:46.649 "enable_zerocopy_send_client": false, 00:14:46.649 "zerocopy_threshold": 0, 00:14:46.649 "tls_version": 0, 00:14:46.649 "enable_ktls": false 00:14:46.649 } 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "method": "sock_impl_set_options", 00:14:46.649 "params": { 00:14:46.649 "impl_name": "posix", 00:14:46.649 "recv_buf_size": 2097152, 00:14:46.649 "send_buf_size": 2097152, 00:14:46.649 "enable_recv_pipe": true, 00:14:46.649 "enable_quickack": false, 00:14:46.649 "enable_placement_id": 0, 00:14:46.649 "enable_zerocopy_send_server": true, 00:14:46.649 "enable_zerocopy_send_client": false, 00:14:46.649 "zerocopy_threshold": 0, 00:14:46.649 "tls_version": 0, 00:14:46.649 "enable_ktls": false 00:14:46.649 } 00:14:46.649 } 00:14:46.649 ] 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "subsystem": "vmd", 00:14:46.649 "config": [] 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "subsystem": "accel", 00:14:46.649 "config": [ 00:14:46.649 { 00:14:46.649 "method": "accel_set_options", 00:14:46.649 "params": { 00:14:46.649 "small_cache_size": 128, 00:14:46.649 "large_cache_size": 16, 00:14:46.649 "task_count": 2048, 00:14:46.649 "sequence_count": 2048, 00:14:46.649 "buf_count": 2048 00:14:46.649 
} 00:14:46.649 } 00:14:46.649 ] 00:14:46.649 }, 00:14:46.649 { 00:14:46.650 "subsystem": "bdev", 00:14:46.650 "config": [ 00:14:46.650 { 00:14:46.650 "method": "bdev_set_options", 00:14:46.650 "params": { 00:14:46.650 "bdev_io_pool_size": 65535, 00:14:46.650 "bdev_io_cache_size": 256, 00:14:46.650 "bdev_auto_examine": true, 00:14:46.650 "iobuf_small_cache_size": 128, 00:14:46.650 "iobuf_large_cache_size": 16 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_raid_set_options", 00:14:46.650 "params": { 00:14:46.650 "process_window_size_kb": 1024, 00:14:46.650 "process_max_bandwidth_mb_sec": 0 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_iscsi_set_options", 00:14:46.650 "params": { 00:14:46.650 "timeout_sec": 30 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_nvme_set_options", 00:14:46.650 "params": { 00:14:46.650 "action_on_timeout": "none", 00:14:46.650 "timeout_us": 0, 00:14:46.650 "timeout_admin_us": 0, 00:14:46.650 "keep_alive_timeout_ms": 10000, 00:14:46.650 "arbitration_burst": 0, 00:14:46.650 "low_priority_weight": 0, 00:14:46.650 "medium_priority_weight": 0, 00:14:46.650 "high_priority_weight": 0, 00:14:46.650 "nvme_adminq_poll_period_us": 10000, 00:14:46.650 "nvme_ioq_poll_period_us": 0, 00:14:46.650 "io_queue_requests": 0, 00:14:46.650 "delay_cmd_submit": true, 00:14:46.650 "transport_retry_count": 4, 00:14:46.650 "bdev_retry_count": 3, 00:14:46.650 "transport_ack_timeout": 0, 00:14:46.650 "ctrlr_loss_timeout_sec": 0, 00:14:46.650 "reconnect_delay_sec": 0, 00:14:46.650 "fast_io_fail_timeout_sec": 0, 00:14:46.650 "disable_auto_failback": false, 00:14:46.650 "generate_uuids": false, 00:14:46.650 "transport_tos": 0, 00:14:46.650 "nvme_error_stat": false, 00:14:46.650 "rdma_srq_size": 0, 00:14:46.650 "io_path_stat": false, 00:14:46.650 "allow_accel_sequence": false, 00:14:46.650 "rdma_max_cq_size": 0, 00:14:46.650 "rdma_cm_event_timeout_ms": 0, 00:14:46.650 "dhchap_digests": [ 00:14:46.650 "sha256", 00:14:46.650 "sha384", 00:14:46.650 "sha512" 00:14:46.650 ], 00:14:46.650 "dhchap_dhgroups": [ 00:14:46.650 "null", 00:14:46.650 "ffdhe2048", 00:14:46.650 "ffdhe3072", 00:14:46.650 "ffdhe4096", 00:14:46.650 "ffdhe6144", 00:14:46.650 "ffdhe8192" 00:14:46.650 ] 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_nvme_set_hotplug", 00:14:46.650 "params": { 00:14:46.650 "period_us": 100000, 00:14:46.650 "enable": false 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_malloc_create", 00:14:46.650 "params": { 00:14:46.650 "name": "malloc0", 00:14:46.650 "num_blocks": 8192, 00:14:46.650 "block_size": 4096, 00:14:46.650 "physical_block_size": 4096, 00:14:46.650 "uuid": "22769644-d03e-495e-ac85-5bda15901af4", 00:14:46.650 "optimal_io_boundary": 0, 00:14:46.650 "md_size": 0, 00:14:46.650 "dif_type": 0, 00:14:46.650 "dif_is_head_of_md": false, 00:14:46.650 "dif_pi_format": 0 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "bdev_wait_for_examine" 00:14:46.650 } 00:14:46.650 ] 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "scsi", 00:14:46.650 "config": null 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "scheduler", 00:14:46.650 "config": [ 00:14:46.650 { 00:14:46.650 "method": "framework_set_scheduler", 00:14:46.650 "params": { 00:14:46.650 "name": "static" 00:14:46.650 } 00:14:46.650 } 00:14:46.650 ] 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "vhost_scsi", 00:14:46.650 "config": [] 00:14:46.650 }, 00:14:46.650 { 
00:14:46.650 "subsystem": "vhost_blk", 00:14:46.650 "config": [] 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "ublk", 00:14:46.650 "config": [ 00:14:46.650 { 00:14:46.650 "method": "ublk_create_target", 00:14:46.650 "params": { 00:14:46.650 "cpumask": "1" 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "ublk_start_disk", 00:14:46.650 "params": { 00:14:46.650 "bdev_name": "malloc0", 00:14:46.650 "ublk_id": 0, 00:14:46.650 "num_queues": 1, 00:14:46.650 "queue_depth": 128 00:14:46.650 } 00:14:46.650 } 00:14:46.650 ] 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "nbd", 00:14:46.650 "config": [] 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "subsystem": "nvmf", 00:14:46.650 "config": [ 00:14:46.650 { 00:14:46.650 "method": "nvmf_set_config", 00:14:46.650 "params": { 00:14:46.650 "discovery_filter": "match_any", 00:14:46.650 "admin_cmd_passthru": { 00:14:46.650 "identify_ctrlr": false 00:14:46.650 }, 00:14:46.650 "dhchap_digests": [ 00:14:46.650 "sha256", 00:14:46.650 "sha384", 00:14:46.650 "sha512" 00:14:46.650 ], 00:14:46.650 "dhchap_dhgroups": [ 00:14:46.650 "null", 00:14:46.650 "ffdhe2048", 00:14:46.650 "ffdhe3072", 00:14:46.650 "ffdhe4096", 00:14:46.650 "ffdhe6144", 00:14:46.650 "ffdhe8192" 00:14:46.650 ] 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "nvmf_set_max_subsystems", 00:14:46.650 "params": { 00:14:46.650 "max_subsystems": 1024 00:14:46.650 } 00:14:46.650 }, 00:14:46.650 { 00:14:46.650 "method": "nvmf_set_crdt", 00:14:46.650 "params": { 00:14:46.650 "crdt1": 0, 00:14:46.650 "crdt2": 0, 00:14:46.650 "crdt3": 0 00:14:46.650 } 00:14:46.650 } 00:14:46.650 ] 00:14:46.650 }, 00:14:46.650 { 00:14:46.651 "subsystem": "iscsi", 00:14:46.651 "config": [ 00:14:46.651 { 00:14:46.651 "method": "iscsi_set_options", 00:14:46.651 "params": { 00:14:46.651 "node_base": "iqn.2016-06.io.spdk", 00:14:46.651 "max_sessions": 128, 00:14:46.651 "max_connections_per_session": 2, 00:14:46.651 "max_queue_depth": 64, 00:14:46.651 "default_time2wait": 2, 00:14:46.651 "default_time2retain": 20, 00:14:46.651 "first_burst_length": 8192, 00:14:46.651 "immediate_data": true, 00:14:46.651 "allow_duplicated_isid": false, 00:14:46.651 "error_recovery_level": 0, 00:14:46.651 "nop_timeout": 60, 00:14:46.651 "nop_in_interval": 30, 00:14:46.651 "disable_chap": false, 00:14:46.651 "require_chap": false, 00:14:46.651 "mutual_chap": false, 00:14:46.651 "chap_group": 0, 00:14:46.651 "max_large_datain_per_connection": 64, 00:14:46.651 "max_r2t_per_connection": 4, 00:14:46.651 "pdu_pool_size": 36864, 00:14:46.651 "immediate_data_pool_size": 16384, 00:14:46.651 "data_out_pool_size": 2048 00:14:46.651 } 00:14:46.651 } 00:14:46.651 ] 00:14:46.651 } 00:14:46.651 ] 00:14:46.651 }' 00:14:46.651 09:27:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:46.651 [2024-11-20 09:27:11.921962] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:14:46.651 [2024-11-20 09:27:11.922087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71079 ] 00:14:46.651 [2024-11-20 09:27:12.077879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.911 [2024-11-20 09:27:12.177854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.853 [2024-11-20 09:27:12.947320] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:47.853 [2024-11-20 09:27:12.948155] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:47.853 [2024-11-20 09:27:12.955446] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:47.853 [2024-11-20 09:27:12.955522] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:47.853 [2024-11-20 09:27:12.955532] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:47.853 [2024-11-20 09:27:12.955538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:47.853 [2024-11-20 09:27:12.964402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:47.853 [2024-11-20 09:27:12.964424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:47.853 [2024-11-20 09:27:12.971330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:47.853 [2024-11-20 09:27:12.971427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:47.853 [2024-11-20 09:27:12.988322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71079 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 71079 ']' 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 71079 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71079 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.853 killing process with pid 71079 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71079' 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 71079 00:14:47.853 09:27:13 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 71079 00:14:48.794 [2024-11-20 09:27:14.243565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:49.054 [2024-11-20 09:27:14.281415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:49.054 [2024-11-20 09:27:14.281561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:49.054 [2024-11-20 09:27:14.291340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:49.054 [2024-11-20 09:27:14.291403] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:49.054 [2024-11-20 09:27:14.291411] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:49.054 [2024-11-20 09:27:14.291439] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:49.054 [2024-11-20 09:27:14.291590] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:50.965 09:27:15 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:50.965 00:14:50.965 real 0m16.511s 00:14:50.965 user 0m4.600s 00:14:50.965 sys 0m2.897s 00:14:50.965 09:27:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.965 09:27:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:50.965 ************************************ 00:14:50.965 END TEST test_save_ublk_config 00:14:50.965 ************************************ 00:14:50.965 09:27:16 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71157 00:14:50.965 09:27:16 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.965 09:27:16 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71157 00:14:50.965 09:27:16 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@835 -- # '[' -z 71157 ']' 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.965 09:27:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:50.965 [2024-11-20 09:27:16.098029] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
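
test_save_ublk_config passes only if the restored target and the kernel agree on the device: ublk.sh@122-123 above compared the RPC inventory against the actual block node before tearing everything down. Roughly, with rpc.py in place of rpc_cmd:

    # RPC-side view: the restored config must report the same node
    blkpath=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 ]]
    # kernel-side view: the node must exist as a real block device
    [[ -b $blkpath ]]

The target now starting runs with -m 0x3, so the remaining tests get two reactor cores instead of one.
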
00:14:50.965 [2024-11-20 09:27:16.098157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:14:50.965 [2024-11-20 09:27:16.258164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.965 [2024-11-20 09:27:16.362101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.965 [2024-11-20 09:27:16.362346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.535 09:27:16 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.535 09:27:16 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:51.535 09:27:16 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:51.535 09:27:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.535 09:27:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.535 09:27:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.535 ************************************ 00:14:51.535 START TEST test_create_ublk 00:14:51.535 ************************************ 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:51.535 09:27:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.535 [2024-11-20 09:27:16.970324] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:51.535 [2024-11-20 09:27:16.972231] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.535 09:27:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:51.535 09:27:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.535 09:27:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.795 [2024-11-20 09:27:17.162452] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:51.795 [2024-11-20 09:27:17.162816] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:51.795 [2024-11-20 09:27:17.162832] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:51.795 [2024-11-20 09:27:17.162839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:51.795 [2024-11-20 09:27:17.171507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:51.795 [2024-11-20 09:27:17.171528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:51.795 
[2024-11-20 09:27:17.178353] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:51.795 [2024-11-20 09:27:17.189380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:51.795 [2024-11-20 09:27:17.205334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.795 09:27:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:51.795 { 00:14:51.795 "ublk_device": "/dev/ublkb0", 00:14:51.795 "id": 0, 00:14:51.795 "queue_depth": 512, 00:14:51.795 "num_queues": 4, 00:14:51.795 "bdev_name": "Malloc0" 00:14:51.795 } 00:14:51.795 ]' 00:14:51.795 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
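
The assembled command is easier to audit broken across lines; the flags are exactly those of the fio_template above:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

Because --time_based --runtime=10 lets the write phase consume the whole time budget, the job is effectively a pure 10-second timed write of the 0xcc pattern; fio says as much in the warning that opens its output below.
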
00:14:52.053 09:27:17 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:52.053 fio: verification read phase will never start because write phase uses all of runtime 00:14:52.053 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:52.053 fio-3.35 00:14:52.053 Starting 1 process 00:15:02.186 00:15:02.186 fio_test: (groupid=0, jobs=1): err= 0: pid=71196: Wed Nov 20 09:27:27 2024 00:15:02.186 write: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(732MiB/10001msec); 0 zone resets 00:15:02.186 clat (usec): min=35, max=3836, avg=52.55, stdev=83.84 00:15:02.186 lat (usec): min=35, max=3852, avg=53.03, stdev=83.86 00:15:02.186 clat percentiles (usec): 00:15:02.186 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 44], 00:15:02.186 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 50], 00:15:02.186 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 59], 95.00th=[ 63], 00:15:02.186 | 99.00th=[ 74], 99.50th=[ 85], 99.90th=[ 1500], 99.95th=[ 2507], 00:15:02.186 | 99.99th=[ 3523] 00:15:02.186 bw ( KiB/s): min=64688, max=81328, per=100.00%, avg=74923.63, stdev=5339.47, samples=19 00:15:02.186 iops : min=16172, max=20332, avg=18730.89, stdev=1334.86, samples=19 00:15:02.186 lat (usec) : 50=62.59%, 100=37.06%, 250=0.17%, 500=0.04%, 750=0.01% 00:15:02.186 lat (usec) : 1000=0.01% 00:15:02.186 lat (msec) : 2=0.05%, 4=0.07% 00:15:02.186 cpu : usr=3.44%, sys=15.29%, ctx=187262, majf=0, minf=796 00:15:02.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:02.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.186 issued rwts: total=0,187271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:02.186 00:15:02.186 Run status group 0 (all jobs): 00:15:02.186 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=732MiB (767MB), run=10001-10001msec 00:15:02.186 00:15:02.186 Disk stats (read/write): 00:15:02.186 ublkb0: ios=0/185442, merge=0/0, ticks=0/8132, in_queue=8132, util=99.09% 00:15:02.186 09:27:27 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:02.186 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.186 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.186 [2024-11-20 09:27:27.632485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:02.443 [2024-11-20 09:27:27.670792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:02.443 [2024-11-20 09:27:27.671737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:02.443 [2024-11-20 09:27:27.680360] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:02.443 [2024-11-20 09:27:27.680592] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:02.443 [2024-11-20 09:27:27.680605] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.443 09:27:27 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
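
The numbers are self-consistent for a single psync job at iodepth=1: a mean completion latency of ~52.6 usec bounds the engine at about 1 / 52.6e-6 ≈ 19k IOPS, and fio reports 18.7k IOPS, i.e. 73.1 MiB/s at the 4 KiB block size. The long tail (99.90th ≈ 1.5 ms, max 3.8 ms) is what inflates the latency stdev to ~84 usec. The teardown that follows is the normal two-step visible in the debug trace: ublk_stop_disk issues UBLK_CMD_STOP_DEV and then UBLK_CMD_DEL_DEV before the backing bdev is deleted.
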
00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.443 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.443 [2024-11-20 09:27:27.695392] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:02.443 request: 00:15:02.443 { 00:15:02.443 "ublk_id": 0, 00:15:02.443 "method": "ublk_stop_disk", 00:15:02.444 "req_id": 1 00:15:02.444 } 00:15:02.444 Got JSON-RPC error response 00:15:02.444 response: 00:15:02.444 { 00:15:02.444 "code": -19, 00:15:02.444 "message": "No such device" 00:15:02.444 } 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.444 09:27:27 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.444 [2024-11-20 09:27:27.705401] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:02.444 [2024-11-20 09:27:27.711313] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:02.444 [2024-11-20 09:27:27.711353] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.444 09:27:27 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.444 09:27:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.701 09:27:28 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:02.701 09:27:28 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.701 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:02.701 09:27:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:02.961 09:27:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:02.961 00:15:02.961 real 0m11.215s 00:15:02.961 user 0m0.649s 00:15:02.961 sys 0m1.612s 00:15:02.961 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.961 ************************************ 00:15:02.961 END TEST test_create_ublk 00:15:02.961 09:27:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.961 ************************************ 00:15:02.961 09:27:28 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:02.961 09:27:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.961 09:27:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.961 09:27:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.961 ************************************ 00:15:02.961 START TEST test_create_multi_ublk 00:15:02.961 ************************************ 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.961 [2024-11-20 09:27:28.225315] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:02.961 [2024-11-20 09:27:28.226877] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.961 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:02.962 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:02.962 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:02.962 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:02.962 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.962 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.220 [2024-11-20 09:27:28.453429] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
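
test_create_multi_ublk repeats the single-device create path across the whole id range: the seq 0 $MAX_DEV_ID loop (MAX_DEV_ID=3, ublk.sh@29) gives one 128 MiB malloc bdev per ublk disk. The loop body, sketched with rpc.py in place of rpc_cmd:

    # four backing bdevs, four ublk disks: /dev/ublkb0 .. /dev/ublkb3
    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
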
00:15:03.220 [2024-11-20 09:27:28.453741] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:03.220 [2024-11-20 09:27:28.453754] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:03.220 [2024-11-20 09:27:28.453762] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.220 [2024-11-20 09:27:28.477320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.220 [2024-11-20 09:27:28.477343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.220 [2024-11-20 09:27:28.489323] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.220 [2024-11-20 09:27:28.489853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:03.220 [2024-11-20 09:27:28.508549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.220 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 [2024-11-20 09:27:28.732442] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:03.477 [2024-11-20 09:27:28.732749] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:03.477 [2024-11-20 09:27:28.732763] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:03.477 [2024-11-20 09:27:28.732769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.477 [2024-11-20 09:27:28.740338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.477 [2024-11-20 09:27:28.740361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.477 [2024-11-20 09:27:28.748337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.477 [2024-11-20 09:27:28.748862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:03.477 [2024-11-20 09:27:28.753219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:03.477 09:27:28 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.477 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 [2024-11-20 09:27:28.920411] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:03.477 [2024-11-20 09:27:28.920722] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:03.477 [2024-11-20 09:27:28.920729] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:03.477 [2024-11-20 09:27:28.920735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.477 [2024-11-20 09:27:28.928331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.477 [2024-11-20 09:27:28.928354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.734 [2024-11-20 09:27:28.936328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.734 [2024-11-20 09:27:28.936864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:03.734 [2024-11-20 09:27:28.944376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.734 09:27:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.734 [2024-11-20 09:27:29.112426] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:03.734 [2024-11-20 09:27:29.112722] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:03.734 [2024-11-20 09:27:29.112731] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:03.734 [2024-11-20 09:27:29.112736] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.734 [2024-11-20 
09:27:29.120331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.734 [2024-11-20 09:27:29.120350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.734 [2024-11-20 09:27:29.128324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.734 [2024-11-20 09:27:29.128838] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:03.734 [2024-11-20 09:27:29.135385] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:03.734 { 00:15:03.734 "ublk_device": "/dev/ublkb0", 00:15:03.734 "id": 0, 00:15:03.734 "queue_depth": 512, 00:15:03.734 "num_queues": 4, 00:15:03.734 "bdev_name": "Malloc0" 00:15:03.734 }, 00:15:03.734 { 00:15:03.734 "ublk_device": "/dev/ublkb1", 00:15:03.734 "id": 1, 00:15:03.734 "queue_depth": 512, 00:15:03.734 "num_queues": 4, 00:15:03.734 "bdev_name": "Malloc1" 00:15:03.734 }, 00:15:03.734 { 00:15:03.734 "ublk_device": "/dev/ublkb2", 00:15:03.734 "id": 2, 00:15:03.734 "queue_depth": 512, 00:15:03.734 "num_queues": 4, 00:15:03.734 "bdev_name": "Malloc2" 00:15:03.734 }, 00:15:03.734 { 00:15:03.734 "ublk_device": "/dev/ublkb3", 00:15:03.734 "id": 3, 00:15:03.734 "queue_depth": 512, 00:15:03.734 "num_queues": 4, 00:15:03.734 "bdev_name": "Malloc3" 00:15:03.734 } 00:15:03.734 ]' 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:03.734 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
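The JSON above is the checkpoint for the creation loop: four malloc bdevs, each exported through a ublk disk with 4 queues of depth 512. A minimal standalone sketch of the same flow, calling rpc.py directly where the test uses its rpc_cmd wrapper; the $rpc shorthand and the fixed 0..3 loop are illustrative, while the sizes and flags are copied from the trace:

# Create four 128 MiB malloc bdevs (4096-byte blocks) and expose each via ublk.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target                            # once per target process
for i in 0 1 2 3; do
    $rpc bdev_malloc_create -b Malloc$i 128 4096   # name, size_mb, block_size
    $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # bdev, ublk id, num_queues, queue_depth
done
$rpc ublk_get_disks | jq -r '.[].ublk_device'      # expect /dev/ublkb0 .. /dev/ublkb3

Each ublk_start_disk corresponds to the ADD_DEV / SET_PARAMS / START_DEV control-command triple visible in the debug output above.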
00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:03.990 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:04.283 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.558 [2024-11-20 09:27:29.816413] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:04.558 [2024-11-20 09:27:29.864803] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:04.558 [2024-11-20 09:27:29.865789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:04.558 [2024-11-20 09:27:29.872327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:04.558 [2024-11-20 09:27:29.872576] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:04.558 [2024-11-20 09:27:29.872590] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.558 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.558 [2024-11-20 09:27:29.888392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:04.558 [2024-11-20 09:27:29.920778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:04.558 [2024-11-20 09:27:29.921757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:04.558 [2024-11-20 09:27:29.928329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:04.558 [2024-11-20 09:27:29.928586] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:04.558 [2024-11-20 09:27:29.928600] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:04.559 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.559 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.559 09:27:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:04.559 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.559 09:27:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.559 [2024-11-20 09:27:29.942400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:04.559 [2024-11-20 09:27:29.987788] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:04.559 [2024-11-20 09:27:29.988740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:04.559 [2024-11-20 09:27:29.995326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:04.559 [2024-11-20 09:27:29.995563] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:04.559 [2024-11-20 09:27:29.995577] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:04.559 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.559 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:04.559 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:04.559 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.559 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
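Stopping follows the mirror-image handshake, STOP_DEV then DEL_DEV then removal from the tailq, and the same sequence is about to complete for ublk3 below. Condensed to the equivalent RPCs, under the same $rpc shorthand as the creation sketch:

# Stop each ublk disk, destroy the target, then drop the backing bdevs.
for i in 0 1 2 3; do
    $rpc ublk_stop_disk $i          # issues UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV
done
$rpc -t 120 ublk_destroy_target     # the test raises the RPC timeout here, as in the trace
for i in 0 1 2 3; do
    $rpc bdev_malloc_delete Malloc$i
done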
00:15:04.559 [2024-11-20 09:27:30.009424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:04.815 [2024-11-20 09:27:30.045783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:04.815 [2024-11-20 09:27:30.046784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:04.815 [2024-11-20 09:27:30.054326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:04.815 [2024-11-20 09:27:30.054617] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:04.815 [2024-11-20 09:27:30.054631] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:04.815 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.815 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:04.815 [2024-11-20 09:27:30.261385] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:04.815 [2024-11-20 09:27:30.264938] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:04.815 [2024-11-20 09:27:30.264973] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:05.072 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:05.072 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:05.072 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:05.072 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.072 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:05.331 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.331 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:05.331 09:27:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:05.331 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.331 09:27:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:05.588 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.589 09:27:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:05.589 09:27:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:05.589 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.589 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:05.846 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.846 09:27:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:05.846 09:27:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:05.846 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.846 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:06.105 ************************************ 00:15:06.105 END TEST test_create_multi_ublk 00:15:06.105 ************************************ 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:06.105 00:15:06.105 real 0m3.265s 00:15:06.105 user 0m0.865s 00:15:06.105 sys 0m0.144s 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.105 09:27:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 09:27:31 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:06.105 09:27:31 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:06.105 09:27:31 ublk -- ublk/ublk.sh@130 -- # killprocess 71157 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@954 -- # '[' -z 71157 ']' 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@958 -- # kill -0 71157 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@959 -- # uname 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71157 00:15:06.105 killing process with pid 71157 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71157' 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@973 -- # kill 71157 00:15:06.105 09:27:31 ublk -- common/autotest_common.sh@978 -- # wait 71157 00:15:06.738 [2024-11-20 09:27:32.084009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:06.738 [2024-11-20 09:27:32.084174] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:07.305 00:15:07.305 real 0m33.433s 00:15:07.305 user 0m34.645s 00:15:07.305 sys 0m9.473s 00:15:07.305 09:27:32 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.305 09:27:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 ************************************ 00:15:07.305 END TEST ublk 00:15:07.305 ************************************ 00:15:07.563 09:27:32 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:07.563 09:27:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:15:07.563 09:27:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.563 09:27:32 -- common/autotest_common.sh@10 -- # set +x 00:15:07.563 ************************************ 00:15:07.563 START TEST ublk_recovery 00:15:07.563 ************************************ 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:07.563 * Looking for test storage... 00:15:07.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.563 09:27:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.563 --rc genhtml_branch_coverage=1 00:15:07.563 --rc genhtml_function_coverage=1 00:15:07.563 --rc genhtml_legend=1 00:15:07.563 --rc geninfo_all_blocks=1 00:15:07.563 --rc geninfo_unexecuted_blocks=1 00:15:07.563 00:15:07.563 ' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.563 --rc genhtml_branch_coverage=1 00:15:07.563 --rc genhtml_function_coverage=1 00:15:07.563 --rc genhtml_legend=1 00:15:07.563 --rc geninfo_all_blocks=1 00:15:07.563 --rc geninfo_unexecuted_blocks=1 00:15:07.563 00:15:07.563 ' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.563 --rc genhtml_branch_coverage=1 00:15:07.563 --rc genhtml_function_coverage=1 00:15:07.563 --rc genhtml_legend=1 00:15:07.563 --rc geninfo_all_blocks=1 00:15:07.563 --rc geninfo_unexecuted_blocks=1 00:15:07.563 00:15:07.563 ' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.563 --rc genhtml_branch_coverage=1 00:15:07.563 --rc genhtml_function_coverage=1 00:15:07.563 --rc genhtml_legend=1 00:15:07.563 --rc geninfo_all_blocks=1 00:15:07.563 --rc geninfo_unexecuted_blocks=1 00:15:07.563 00:15:07.563 ' 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:07.563 09:27:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:07.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71547 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:07.563 09:27:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71547 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71547 ']' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.563 09:27:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.563 [2024-11-20 09:27:33.010524] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:15:07.563 [2024-11-20 09:27:33.011112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71547 ] 00:15:07.822 [2024-11-20 09:27:33.170153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:07.822 [2024-11-20 09:27:33.271217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.822 [2024-11-20 09:27:33.271234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:15:08.756 09:27:33 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-20 09:27:33.868321] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:08.756 [2024-11-20 09:27:33.870228] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 09:27:33 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:08.756 09:27:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.757 09:27:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.757 malloc0 00:15:08.757 09:27:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.757 09:27:33 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:08.757 09:27:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.757 09:27:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.757 [2024-11-20 09:27:33.972455] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:08.757 [2024-11-20 09:27:33.972552] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:08.757 [2024-11-20 09:27:33.972563] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:08.757 [2024-11-20 09:27:33.972576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.757 [2024-11-20 09:27:33.981414] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.757 [2024-11-20 09:27:33.981436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.757 [2024-11-20 09:27:33.988329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.757 [2024-11-20 09:27:33.988487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:08.757 [2024-11-20 09:27:34.011333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.757 1 00:15:08.757 09:27:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.757 09:27:34 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:09.696 09:27:35 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71582 00:15:09.696 09:27:35 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:09.696 09:27:35 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:09.696 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.696 fio-3.35 00:15:09.696 Starting 1 process 00:15:14.971 09:27:40 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71547 00:15:14.971 09:27:40 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:20.238 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71547 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:20.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.238 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71691 00:15:20.238 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:20.238 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71691 00:15:20.238 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71691 ']' 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.238 09:27:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.238 [2024-11-20 09:27:45.109798] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
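What just happened above: ublk_recovery.sh started a target, put a 60-second randrw fio job on /dev/ublkb1, killed the target with SIGKILL five seconds in, and is now booting a replacement (pid 71691). The crash half of the scenario, condensed from the trace; $SPDK_BIN_DIR comes from the test environment, and waitforlisten-style readiness checks are omitted for brevity:

modprobe ublk_drv
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &             # first target
spdk_pid=$!
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128          # exposes /dev/ublkb1
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!
sleep 5
kill -9 $spdk_pid                                     # crash the target mid-I/O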
00:15:20.238 [2024-11-20 09:27:45.109907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71691 ] 00:15:20.238 [2024-11-20 09:27:45.266093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:20.238 [2024-11-20 09:27:45.352629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.238 [2024-11-20 09:27:45.352823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:15:20.808 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.808 [2024-11-20 09:27:45.965323] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:20.808 [2024-11-20 09:27:45.966995] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.808 09:27:45 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.808 09:27:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.808 malloc0 00:15:20.808 09:27:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.808 09:27:46 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:20.808 09:27:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.808 09:27:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.808 [2024-11-20 09:27:46.053436] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:20.808 [2024-11-20 09:27:46.053476] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:20.808 [2024-11-20 09:27:46.053484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:20.808 [2024-11-20 09:27:46.061365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:20.808 [2024-11-20 09:27:46.061402] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:20.808 1 00:15:20.808 09:27:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.808 09:27:46 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71582 00:15:21.749 [2024-11-20 09:27:47.061444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:21.750 [2024-11-20 09:27:47.067326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:21.750 [2024-11-20 09:27:47.067346] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:22.683 [2024-11-20 09:27:48.067384] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:22.683 [2024-11-20 09:27:48.071327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:22.683 [2024-11-20 09:27:48.071344] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:15:24.055 [2024-11-20 09:27:49.071370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:24.055 [2024-11-20 09:27:49.083330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:24.055 [2024-11-20 09:27:49.083347] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:24.055 [2024-11-20 09:27:49.083356] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:24.055 [2024-11-20 09:27:49.083434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:45.963 [2024-11-20 09:28:10.551332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:45.963 [2024-11-20 09:28:10.554827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:45.963 [2024-11-20 09:28:10.566478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:45.963 [2024-11-20 09:28:10.566561] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:12.531 00:16:12.531 fio_test: (groupid=0, jobs=1): err= 0: pid=71585: Wed Nov 20 09:28:35 2024 00:16:12.531 read: IOPS=14.3k, BW=55.9MiB/s (58.6MB/s)(3352MiB/60002msec) 00:16:12.531 slat (nsec): min=958, max=1161.6k, avg=5116.06, stdev=2289.41 00:16:12.531 clat (usec): min=907, max=30550k, avg=4505.66, stdev=267895.15 00:16:12.531 lat (usec): min=920, max=30550k, avg=4510.78, stdev=267895.15 00:16:12.531 clat percentiles (usec): 00:16:12.531 | 1.00th=[ 1680], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893], 00:16:12.531 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1975], 60.00th=[ 2008], 00:16:12.531 | 70.00th=[ 2073], 80.00th=[ 2343], 90.00th=[ 2442], 95.00th=[ 3130], 00:16:12.531 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7570], 99.95th=[ 8848], 00:16:12.531 | 99.99th=[12911] 00:16:12.531 bw ( KiB/s): min=21048, max=127688, per=100.00%, avg=114459.39, stdev=17986.94, samples=59 00:16:12.531 iops : min= 5262, max=31922, avg=28614.85, stdev=4496.74, samples=59 00:16:12.531 write: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(3347MiB/60002msec); 0 zone resets 00:16:12.531 slat (nsec): min=942, max=299981, avg=5151.41, stdev=2028.41 00:16:12.531 clat (usec): min=959, max=30550k, avg=4439.63, stdev=259830.73 00:16:12.531 lat (usec): min=968, max=30550k, avg=4444.78, stdev=259830.73 00:16:12.531 clat percentiles (usec): 00:16:12.531 | 1.00th=[ 1696], 5.00th=[ 1893], 10.00th=[ 1942], 20.00th=[ 1975], 00:16:12.531 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2057], 60.00th=[ 2089], 00:16:12.531 | 70.00th=[ 2147], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 3064], 00:16:12.531 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8848], 00:16:12.531 | 99.99th=[13042] 00:16:12.531 bw ( KiB/s): min=20960, max=128328, per=100.00%, avg=114301.02, stdev=17981.60, samples=59 00:16:12.531 iops : min= 5240, max=32082, avg=28575.25, stdev=4495.40, samples=59 00:16:12.531 lat (usec) : 1000=0.01% 00:16:12.531 lat (msec) : 2=43.52%, 4=53.78%, 10=2.66%, 20=0.04%, >=2000=0.01% 00:16:12.531 cpu : usr=3.56%, sys=15.12%, ctx=60715, majf=0, minf=14 00:16:12.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:12.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.531 issued 
rwts: total=858074,856883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.531 00:16:12.531 Run status group 0 (all jobs): 00:16:12.531 READ: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=3352MiB (3515MB), run=60002-60002msec 00:16:12.531 WRITE: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=3347MiB (3510MB), run=60002-60002msec 00:16:12.531 00:16:12.531 Disk stats (read/write): 00:16:12.531 ublkb1: ios=854671/853570, merge=0/0, ticks=3807722/3677552, in_queue=7485274, util=99.91% 00:16:12.531 09:28:35 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 [2024-11-20 09:28:35.271588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.531 [2024-11-20 09:28:35.303438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.531 [2024-11-20 09:28:35.303588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.531 [2024-11-20 09:28:35.310331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.531 [2024-11-20 09:28:35.310436] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:12.531 [2024-11-20 09:28:35.310443] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.531 09:28:35 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 [2024-11-20 09:28:35.324428] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.531 [2024-11-20 09:28:35.328134] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:12.531 [2024-11-20 09:28:35.328172] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.531 09:28:35 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:12.531 09:28:35 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:12.531 09:28:35 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71691 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71691 ']' 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71691 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71691 00:16:12.531 killing process with pid 71691 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71691' 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71691 00:16:12.531 09:28:35 ublk_recovery -- common/autotest_common.sh@978 -- # wait 71691 
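The recovery itself is driven by a single RPC against the replacement target: ublk_recover_disk re-registers ublk 1 with the kernel, the driver polls GET_DEV_INFO while the device reports state 1, and once START_USER_RECOVERY / END_USER_RECOVERY complete, the still-running fio job rides out its full 60 seconds (~854k reads and ~854k writes on ublkb1 at 99.91% utilization above). The recovery half, under the same assumptions as the crash sketch:

"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &            # replacement target
spdk_pid=$!
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096         # recreate the same backing bdev
rpc.py ublk_recover_disk malloc0 1                   # re-attach the existing /dev/ublkb1
wait $fio_proc                                       # fio finishes its 60 s run intact
rpc.py ublk_stop_disk 1
rpc.py ublk_destroy_target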
00:16:12.531 [2024-11-20 09:28:36.404253] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.531 [2024-11-20 09:28:36.404315] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:12.531 00:16:12.531 real 1m4.327s 00:16:12.531 user 1m47.439s 00:16:12.531 sys 0m21.785s 00:16:12.531 09:28:37 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.531 09:28:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 ************************************ 00:16:12.531 END TEST ublk_recovery 00:16:12.531 ************************************ 00:16:12.531 09:28:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:16:12.531 09:28:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:12.531 09:28:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.531 09:28:37 -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 09:28:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:12.531 09:28:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:12.531 09:28:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.531 09:28:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.531 09:28:37 -- common/autotest_common.sh@10 -- # set +x 00:16:12.531 ************************************ 00:16:12.531 START TEST ftl 00:16:12.531 ************************************ 00:16:12.531 09:28:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:12.531 * Looking for test storage... 
00:16:12.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.531 09:28:37 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.531 09:28:37 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.531 09:28:37 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.531 09:28:37 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.531 09:28:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.531 09:28:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.531 09:28:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.531 09:28:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.531 09:28:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.531 09:28:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:12.531 09:28:37 ftl -- scripts/common.sh@345 -- # : 1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.531 09:28:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.531 09:28:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@353 -- # local d=1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.531 09:28:37 ftl -- scripts/common.sh@355 -- # echo 1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.531 09:28:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@353 -- # local d=2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.531 09:28:37 ftl -- scripts/common.sh@355 -- # echo 2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.531 09:28:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.531 09:28:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.531 09:28:37 ftl -- scripts/common.sh@368 -- # return 0 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.532 --rc genhtml_branch_coverage=1 00:16:12.532 --rc genhtml_function_coverage=1 00:16:12.532 --rc genhtml_legend=1 00:16:12.532 --rc geninfo_all_blocks=1 00:16:12.532 --rc geninfo_unexecuted_blocks=1 00:16:12.532 00:16:12.532 ' 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.532 --rc genhtml_branch_coverage=1 00:16:12.532 --rc genhtml_function_coverage=1 00:16:12.532 --rc genhtml_legend=1 00:16:12.532 --rc geninfo_all_blocks=1 00:16:12.532 --rc geninfo_unexecuted_blocks=1 00:16:12.532 00:16:12.532 ' 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.532 --rc genhtml_branch_coverage=1 00:16:12.532 --rc genhtml_function_coverage=1 00:16:12.532 --rc 
genhtml_legend=1 00:16:12.532 --rc geninfo_all_blocks=1 00:16:12.532 --rc geninfo_unexecuted_blocks=1 00:16:12.532 00:16:12.532 ' 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.532 --rc genhtml_branch_coverage=1 00:16:12.532 --rc genhtml_function_coverage=1 00:16:12.532 --rc genhtml_legend=1 00:16:12.532 --rc geninfo_all_blocks=1 00:16:12.532 --rc geninfo_unexecuted_blocks=1 00:16:12.532 00:16:12.532 ' 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:12.532 09:28:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:12.532 09:28:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.532 09:28:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.532 09:28:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:12.532 09:28:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:12.532 09:28:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.532 09:28:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.532 09:28:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.532 09:28:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.532 09:28:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.532 09:28:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:12.532 09:28:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:12.532 09:28:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.532 09:28:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.532 09:28:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:12.532 09:28:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.532 09:28:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.532 09:28:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.532 09:28:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.532 09:28:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:12.532 09:28:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:12.532 09:28:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.532 09:28:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:12.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:12.532 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:12.532 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:12.532 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:12.532 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72496 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:12.532 09:28:37 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72496 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 72496 ']' 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.532 09:28:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:12.532 [2024-11-20 09:28:37.853373] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:16:12.532 [2024-11-20 09:28:37.853694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:16:12.789 [2024-11-20 09:28:38.018461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.789 [2024-11-20 09:28:38.106673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.354 09:28:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.354 09:28:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:16:13.354 09:28:38 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:13.611 09:28:38 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:14.542 09:28:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:14.542 09:28:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:14.799 09:28:40 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:14.799 09:28:40 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:14.799 09:28:40 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@50 -- # break 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:15.056 09:28:40 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@63 -- # break 00:16:15.056 09:28:40 ftl -- ftl/ftl.sh@66 -- # killprocess 72496 00:16:15.056 09:28:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 72496 ']' 00:16:15.056 09:28:40 ftl -- common/autotest_common.sh@958 -- # kill -0 72496 00:16:15.056 09:28:40 ftl -- common/autotest_common.sh@959 -- # uname 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72496 00:16:15.313 killing process with pid 72496 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72496' 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@973 -- # kill 72496 00:16:15.313 09:28:40 ftl -- common/autotest_common.sh@978 -- # wait 72496 00:16:16.730 09:28:42 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:16.730 09:28:42 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:16.730 09:28:42 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:16.730 09:28:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.730 09:28:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:16.730 ************************************ 00:16:16.730 START TEST ftl_fio_basic 00:16:16.730 ************************************ 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:16.730 * Looking for test storage... 
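Before killing its probe target, ftl.sh sized up the machine: it brings spdk_tgt up paused, disables bdev auto-examine, loads the local NVMe namespaces, then uses the two jq filters shown above to pick an NV cache (which needs 64-byte metadata) and a distinct base device of at least 1310720 blocks. A sketch of that selection; the jq expressions are verbatim from the trace, while head -n1 stands in for the script's first-match break and the process substitution replaces the /dev/fd/62 redirection seen above:

# Probe target: start paused, disable auto-examine, attach all local NVMe.
"$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &
rpc.py bdev_set_options -d
rpc.py framework_start_init
rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)

# NV cache: any non-zoned namespace with 64-byte metadata and >= 1310720 blocks.
nv_cache=$(rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false
    and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)

# Base device: any other qualifying namespace.
device=$(rpc.py bdev_get_bdevs | jq -r ".[] | select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\"
    and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address" | head -n1)

In this run that yields 0000:00:10.0 for the cache and 0000:00:11.0 for the base namespace, which ftl.sh then hands to fio.sh.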
00:16:16.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:16.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.730 --rc genhtml_branch_coverage=1 00:16:16.730 --rc genhtml_function_coverage=1 00:16:16.730 --rc genhtml_legend=1 00:16:16.730 --rc geninfo_all_blocks=1 00:16:16.730 --rc geninfo_unexecuted_blocks=1 00:16:16.730 00:16:16.730 ' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:16.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.730 --rc 
genhtml_branch_coverage=1 00:16:16.730 --rc genhtml_function_coverage=1 00:16:16.730 --rc genhtml_legend=1 00:16:16.730 --rc geninfo_all_blocks=1 00:16:16.730 --rc geninfo_unexecuted_blocks=1 00:16:16.730 00:16:16.730 ' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:16.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.730 --rc genhtml_branch_coverage=1 00:16:16.730 --rc genhtml_function_coverage=1 00:16:16.730 --rc genhtml_legend=1 00:16:16.730 --rc geninfo_all_blocks=1 00:16:16.730 --rc geninfo_unexecuted_blocks=1 00:16:16.730 00:16:16.730 ' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:16.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.730 --rc genhtml_branch_coverage=1 00:16:16.730 --rc genhtml_function_coverage=1 00:16:16.730 --rc genhtml_legend=1 00:16:16.730 --rc geninfo_all_blocks=1 00:16:16.730 --rc geninfo_unexecuted_blocks=1 00:16:16.730 00:16:16.730 ' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:16.730 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.731 
09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72634 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72634 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72634 ']' 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
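fio.sh@44-46 above launch spdk_tgt in the background with core mask 7 (three cores, matching the three reactors that come up below) and then block in waitforlisten until pid 72634 answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern; wait_for_rpc_sock is a hypothetical simplified stand-in for the real waitforlisten in common/autotest_common.sh, which is where the rpc_addr and max_retries=100 locals traced above come from:

    # Simplified stand-in, not SPDK's actual helper.
    wait_for_rpc_sock() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
            sleep 0.1
        done
        return 1
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &   # mask 0x7 = cores 0,1,2
    svcpid=$!
    wait_for_rpc_sock "$svcpid" /var/tmp/spdk.sock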
00:16:16.731 09:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.731 09:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.989 [2024-11-20 09:28:42.260453] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:16:16.989 [2024-11-20 09:28:42.260726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:16:16.989 [2024-11-20 09:28:42.423959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.247 [2024-11-20 09:28:42.531627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.247 [2024-11-20 09:28:42.531854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.247 [2024-11-20 09:28:42.532035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:17.812 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:18.069 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:18.326 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:18.326 { 00:16:18.326 "name": "nvme0n1", 00:16:18.326 "aliases": [ 00:16:18.327 "194b871e-7880-4279-8b26-f51a9d231018" 00:16:18.327 ], 00:16:18.327 "product_name": "NVMe disk", 00:16:18.327 "block_size": 4096, 00:16:18.327 "num_blocks": 1310720, 00:16:18.327 "uuid": "194b871e-7880-4279-8b26-f51a9d231018", 00:16:18.327 "numa_id": -1, 00:16:18.327 "assigned_rate_limits": { 00:16:18.327 "rw_ios_per_sec": 0, 00:16:18.327 "rw_mbytes_per_sec": 0, 00:16:18.327 "r_mbytes_per_sec": 0, 00:16:18.327 "w_mbytes_per_sec": 0 00:16:18.327 }, 00:16:18.327 "claimed": false, 00:16:18.327 "zoned": false, 00:16:18.327 "supported_io_types": { 00:16:18.327 "read": true, 00:16:18.327 "write": true, 00:16:18.327 "unmap": true, 00:16:18.327 "flush": true, 
00:16:18.327 "reset": true, 00:16:18.327 "nvme_admin": true, 00:16:18.327 "nvme_io": true, 00:16:18.327 "nvme_io_md": false, 00:16:18.327 "write_zeroes": true, 00:16:18.327 "zcopy": false, 00:16:18.327 "get_zone_info": false, 00:16:18.327 "zone_management": false, 00:16:18.327 "zone_append": false, 00:16:18.327 "compare": true, 00:16:18.327 "compare_and_write": false, 00:16:18.327 "abort": true, 00:16:18.327 "seek_hole": false, 00:16:18.327 "seek_data": false, 00:16:18.327 "copy": true, 00:16:18.327 "nvme_iov_md": false 00:16:18.327 }, 00:16:18.327 "driver_specific": { 00:16:18.327 "nvme": [ 00:16:18.327 { 00:16:18.327 "pci_address": "0000:00:11.0", 00:16:18.327 "trid": { 00:16:18.327 "trtype": "PCIe", 00:16:18.327 "traddr": "0000:00:11.0" 00:16:18.327 }, 00:16:18.327 "ctrlr_data": { 00:16:18.327 "cntlid": 0, 00:16:18.327 "vendor_id": "0x1b36", 00:16:18.327 "model_number": "QEMU NVMe Ctrl", 00:16:18.327 "serial_number": "12341", 00:16:18.327 "firmware_revision": "8.0.0", 00:16:18.327 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:18.327 "oacs": { 00:16:18.327 "security": 0, 00:16:18.327 "format": 1, 00:16:18.327 "firmware": 0, 00:16:18.327 "ns_manage": 1 00:16:18.327 }, 00:16:18.327 "multi_ctrlr": false, 00:16:18.327 "ana_reporting": false 00:16:18.327 }, 00:16:18.327 "vs": { 00:16:18.327 "nvme_version": "1.4" 00:16:18.327 }, 00:16:18.327 "ns_data": { 00:16:18.327 "id": 1, 00:16:18.327 "can_share": false 00:16:18.327 } 00:16:18.327 } 00:16:18.327 ], 00:16:18.327 "mp_policy": "active_passive" 00:16:18.327 } 00:16:18.327 } 00:16:18.327 ]' 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:18.327 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:18.584 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:18.584 09:28:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:18.877 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=99f4c114-87a4-4072-94d3-8f3d9aa873c0 00:16:18.877 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 99f4c114-87a4-4072-94d3-8f3d9aa873c0 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:19.151 09:28:44 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:19.151 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.409 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:19.409 { 00:16:19.409 "name": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:19.409 "aliases": [ 00:16:19.409 "lvs/nvme0n1p0" 00:16:19.409 ], 00:16:19.409 "product_name": "Logical Volume", 00:16:19.409 "block_size": 4096, 00:16:19.409 "num_blocks": 26476544, 00:16:19.409 "uuid": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:19.409 "assigned_rate_limits": { 00:16:19.409 "rw_ios_per_sec": 0, 00:16:19.409 "rw_mbytes_per_sec": 0, 00:16:19.409 "r_mbytes_per_sec": 0, 00:16:19.409 "w_mbytes_per_sec": 0 00:16:19.409 }, 00:16:19.409 "claimed": false, 00:16:19.409 "zoned": false, 00:16:19.409 "supported_io_types": { 00:16:19.409 "read": true, 00:16:19.409 "write": true, 00:16:19.409 "unmap": true, 00:16:19.409 "flush": false, 00:16:19.409 "reset": true, 00:16:19.409 "nvme_admin": false, 00:16:19.409 "nvme_io": false, 00:16:19.409 "nvme_io_md": false, 00:16:19.409 "write_zeroes": true, 00:16:19.409 "zcopy": false, 00:16:19.409 "get_zone_info": false, 00:16:19.409 "zone_management": false, 00:16:19.409 "zone_append": false, 00:16:19.409 "compare": false, 00:16:19.409 "compare_and_write": false, 00:16:19.409 "abort": false, 00:16:19.409 "seek_hole": true, 00:16:19.409 "seek_data": true, 00:16:19.409 "copy": false, 00:16:19.409 "nvme_iov_md": false 00:16:19.409 }, 00:16:19.409 "driver_specific": { 00:16:19.409 "lvol": { 00:16:19.409 "lvol_store_uuid": "99f4c114-87a4-4072-94d3-8f3d9aa873c0", 00:16:19.409 "base_bdev": "nvme0n1", 00:16:19.409 "thin_provision": true, 00:16:19.409 "num_allocated_clusters": 0, 00:16:19.409 "snapshot": false, 00:16:19.409 "clone": false, 00:16:19.409 "esnap_clone": false 00:16:19.410 } 00:16:19.410 } 00:16:19.410 } 00:16:19.410 ]' 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:19.410 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
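At this point both controllers are attached: nvme0 at 0000:00:11.0 carries the lvol store with the thin-provisioned base volume, and nvc0 at 0000:00:10.0 will back the NV cache. Note the earlier [[ 103424 -le 5120 ]] check coming up false: the requested 103424 MiB exceeds the 5120 MiB namespace, and the trace responds by building an lvstore and thin-provisioning the base volume (bdev_lvol_create ... 103424 -t), so FTL can be exercised against a large virtual device on a small QEMU disk. The get_bdev_size calls repeated throughout this section all follow one recipe, MiB = num_blocks x block_size / 2^20; a self-contained sketch of that pattern:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Sketch of the get_bdev_size pattern traced above (autotest_common.sh@1382-1392).
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for every bdev in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 and 26476544 above
        echo $(( nb * bs / 1024 / 1024 ))             # -> 5120 MiB and 103424 MiB
    }
    get_bdev_size nvme0n1   # prints 5120, matching the trace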
00:16:19.669 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:19.669 09:28:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 762026ba-20bf-457f-b440-8aaf5834222a 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:19.929 { 00:16:19.929 "name": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:19.929 "aliases": [ 00:16:19.929 "lvs/nvme0n1p0" 00:16:19.929 ], 00:16:19.929 "product_name": "Logical Volume", 00:16:19.929 "block_size": 4096, 00:16:19.929 "num_blocks": 26476544, 00:16:19.929 "uuid": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:19.929 "assigned_rate_limits": { 00:16:19.929 "rw_ios_per_sec": 0, 00:16:19.929 "rw_mbytes_per_sec": 0, 00:16:19.929 "r_mbytes_per_sec": 0, 00:16:19.929 "w_mbytes_per_sec": 0 00:16:19.929 }, 00:16:19.929 "claimed": false, 00:16:19.929 "zoned": false, 00:16:19.929 "supported_io_types": { 00:16:19.929 "read": true, 00:16:19.929 "write": true, 00:16:19.929 "unmap": true, 00:16:19.929 "flush": false, 00:16:19.929 "reset": true, 00:16:19.929 "nvme_admin": false, 00:16:19.929 "nvme_io": false, 00:16:19.929 "nvme_io_md": false, 00:16:19.929 "write_zeroes": true, 00:16:19.929 "zcopy": false, 00:16:19.929 "get_zone_info": false, 00:16:19.929 "zone_management": false, 00:16:19.929 "zone_append": false, 00:16:19.929 "compare": false, 00:16:19.929 "compare_and_write": false, 00:16:19.929 "abort": false, 00:16:19.929 "seek_hole": true, 00:16:19.929 "seek_data": true, 00:16:19.929 "copy": false, 00:16:19.929 "nvme_iov_md": false 00:16:19.929 }, 00:16:19.929 "driver_specific": { 00:16:19.929 "lvol": { 00:16:19.929 "lvol_store_uuid": "99f4c114-87a4-4072-94d3-8f3d9aa873c0", 00:16:19.929 "base_bdev": "nvme0n1", 00:16:19.929 "thin_provision": true, 00:16:19.929 "num_allocated_clusters": 0, 00:16:19.929 "snapshot": false, 00:16:19.929 "clone": false, 00:16:19.929 "esnap_clone": false 00:16:19.929 } 00:16:19.929 } 00:16:19.929 } 00:16:19.929 ]' 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:19.929 09:28:45 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:20.188 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 762026ba-20bf-457f-b440-8aaf5834222a 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=762026ba-20bf-457f-b440-8aaf5834222a 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:20.188 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 762026ba-20bf-457f-b440-8aaf5834222a 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:20.446 { 00:16:20.446 "name": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:20.446 "aliases": [ 00:16:20.446 "lvs/nvme0n1p0" 00:16:20.446 ], 00:16:20.446 "product_name": "Logical Volume", 00:16:20.446 "block_size": 4096, 00:16:20.446 "num_blocks": 26476544, 00:16:20.446 "uuid": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:20.446 "assigned_rate_limits": { 00:16:20.446 "rw_ios_per_sec": 0, 00:16:20.446 "rw_mbytes_per_sec": 0, 00:16:20.446 "r_mbytes_per_sec": 0, 00:16:20.446 "w_mbytes_per_sec": 0 00:16:20.446 }, 00:16:20.446 "claimed": false, 00:16:20.446 "zoned": false, 00:16:20.446 "supported_io_types": { 00:16:20.446 "read": true, 00:16:20.446 "write": true, 00:16:20.446 "unmap": true, 00:16:20.446 "flush": false, 00:16:20.446 "reset": true, 00:16:20.446 "nvme_admin": false, 00:16:20.446 "nvme_io": false, 00:16:20.446 "nvme_io_md": false, 00:16:20.446 "write_zeroes": true, 00:16:20.446 "zcopy": false, 00:16:20.446 "get_zone_info": false, 00:16:20.446 "zone_management": false, 00:16:20.446 "zone_append": false, 00:16:20.446 "compare": false, 00:16:20.446 "compare_and_write": false, 00:16:20.446 "abort": false, 00:16:20.446 "seek_hole": true, 00:16:20.446 "seek_data": true, 00:16:20.446 "copy": false, 00:16:20.446 "nvme_iov_md": false 00:16:20.446 }, 00:16:20.446 "driver_specific": { 00:16:20.446 "lvol": { 00:16:20.446 "lvol_store_uuid": "99f4c114-87a4-4072-94d3-8f3d9aa873c0", 00:16:20.446 "base_bdev": "nvme0n1", 00:16:20.446 "thin_provision": true, 00:16:20.446 "num_allocated_clusters": 0, 00:16:20.446 "snapshot": false, 00:16:20.446 "clone": false, 00:16:20.446 "esnap_clone": false 00:16:20.446 } 00:16:20.446 } 00:16:20.446 } 00:16:20.446 ]' 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:20.446 09:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
762026ba-20bf-457f-b440-8aaf5834222a -c nvc0n1p0 --l2p_dram_limit 60 00:16:20.705 [2024-11-20 09:28:45.913426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.913640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:20.705 [2024-11-20 09:28:45.913664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:20.705 [2024-11-20 09:28:45.913673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.913739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.913751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:20.705 [2024-11-20 09:28:45.913761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:20.705 [2024-11-20 09:28:45.913768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.913808] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:20.705 [2024-11-20 09:28:45.914581] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:20.705 [2024-11-20 09:28:45.914603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.914610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:20.705 [2024-11-20 09:28:45.914620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:16:20.705 [2024-11-20 09:28:45.914627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.914664] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d48a70f7-fb74-4d31-8ef7-ce49bfc103f6 00:16:20.705 [2024-11-20 09:28:45.915752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.915787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:20.705 [2024-11-20 09:28:45.915797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:20.705 [2024-11-20 09:28:45.915806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.921077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.921198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.705 [2024-11-20 09:28:45.921212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:16:20.705 [2024-11-20 09:28:45.921221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.921364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.921377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.705 [2024-11-20 09:28:45.921385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:20.705 [2024-11-20 09:28:45.921398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.921452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.921463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:20.705 [2024-11-20 09:28:45.921471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.006 ms 00:16:20.705 [2024-11-20 09:28:45.921480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.921508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.705 [2024-11-20 09:28:45.925057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.925088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.705 [2024-11-20 09:28:45.925099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.551 ms 00:16:20.705 [2024-11-20 09:28:45.925109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.925148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.925157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:20.705 [2024-11-20 09:28:45.925166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:20.705 [2024-11-20 09:28:45.925173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.925196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:20.705 [2024-11-20 09:28:45.925355] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:20.705 [2024-11-20 09:28:45.925371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:20.705 [2024-11-20 09:28:45.925382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:20.705 [2024-11-20 09:28:45.925393] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:20.705 [2024-11-20 09:28:45.925402] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:20.705 [2024-11-20 09:28:45.925411] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:20.705 [2024-11-20 09:28:45.925419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:20.705 [2024-11-20 09:28:45.925427] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:20.705 [2024-11-20 09:28:45.925434] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:20.705 [2024-11-20 09:28:45.925443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.925453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:20.705 [2024-11-20 09:28:45.925462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:20.705 [2024-11-20 09:28:45.925469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.925558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.705 [2024-11-20 09:28:45.925567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:20.705 [2024-11-20 09:28:45.925576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:20.705 [2024-11-20 09:28:45.925582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.705 [2024-11-20 09:28:45.925718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:16:20.705 [2024-11-20 09:28:45.925732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:20.705 [2024-11-20 09:28:45.925745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.705 [2024-11-20 09:28:45.925753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.705 [2024-11-20 09:28:45.925763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:20.705 [2024-11-20 09:28:45.925769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:20.706 [2024-11-20 09:28:45.925793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.706 [2024-11-20 09:28:45.925807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:20.706 [2024-11-20 09:28:45.925814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:20.706 [2024-11-20 09:28:45.925822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.706 [2024-11-20 09:28:45.925829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:20.706 [2024-11-20 09:28:45.925841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:20.706 [2024-11-20 09:28:45.925847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:20.706 [2024-11-20 09:28:45.925867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:20.706 [2024-11-20 09:28:45.925890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:20.706 [2024-11-20 09:28:45.925910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:20.706 [2024-11-20 09:28:45.925932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:20.706 [2024-11-20 09:28:45.925953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.706 [2024-11-20 09:28:45.925968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:20.706 [2024-11-20 09:28:45.925977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:16:20.706 [2024-11-20 09:28:45.925984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.706 [2024-11-20 09:28:45.925991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:20.706 [2024-11-20 09:28:45.926011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:20.706 [2024-11-20 09:28:45.926019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.706 [2024-11-20 09:28:45.926026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:20.706 [2024-11-20 09:28:45.926034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:20.706 [2024-11-20 09:28:45.926040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.926050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:20.706 [2024-11-20 09:28:45.926056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:20.706 [2024-11-20 09:28:45.926064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.926070] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:20.706 [2024-11-20 09:28:45.926079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:20.706 [2024-11-20 09:28:45.926086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.706 [2024-11-20 09:28:45.926095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.706 [2024-11-20 09:28:45.926103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:20.706 [2024-11-20 09:28:45.926112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:20.706 [2024-11-20 09:28:45.926120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:20.706 [2024-11-20 09:28:45.926129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:20.706 [2024-11-20 09:28:45.926135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:20.706 [2024-11-20 09:28:45.926143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:20.706 [2024-11-20 09:28:45.926153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:20.706 [2024-11-20 09:28:45.926164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:20.706 [2024-11-20 09:28:45.926181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:20.706 [2024-11-20 09:28:45.926189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:20.706 [2024-11-20 09:28:45.926197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:20.706 [2024-11-20 09:28:45.926204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:20.706 [2024-11-20 09:28:45.926213] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:20.706 [2024-11-20 09:28:45.926219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:20.706 [2024-11-20 09:28:45.926228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:20.706 [2024-11-20 09:28:45.926235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:20.706 [2024-11-20 09:28:45.926246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:20.706 [2024-11-20 09:28:45.926286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:20.706 [2024-11-20 09:28:45.926295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:20.706 [2024-11-20 09:28:45.926323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:20.706 [2024-11-20 09:28:45.926330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:20.706 [2024-11-20 09:28:45.926339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:20.706 [2024-11-20 09:28:45.926346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.706 [2024-11-20 09:28:45.926355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:20.706 [2024-11-20 09:28:45.926363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:16:20.706 [2024-11-20 09:28:45.926372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.706 [2024-11-20 09:28:45.926455] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
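The layout dump above is consistent with the bdev_ftl_create arguments: ftl0 exposes 20971520 logical blocks (its num_blocks further below), each with a 4-byte L2P entry ("L2P address size: 4"), so the full mapping table is exactly the 80.00 MiB l2p region, while --l2p_dram_limit 60 only caps how much of that table stays resident, which is why ftl_l2p_cache reports "59 (of 60) MiB" a few lines down. A quick back-of-envelope check, not part of fio.sh:

    # Values copied from the dump above; purely illustrative arithmetic.
    l2p_entries=20971520   # one L2P entry per logical block
    entry_bytes=4          # "L2P address size: 4"
    block_bytes=4096       # bdev block size
    echo "l2p table:     $(( l2p_entries * entry_bytes / 1024 / 1024 )) MiB"   # -> 80
    echo "user capacity: $(( l2p_entries * block_bytes / 1024**3 )) GiB"       # -> 80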
00:16:20.706 [2024-11-20 09:28:45.926469] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:23.987 [2024-11-20 09:28:49.167795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.168020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:23.987 [2024-11-20 09:28:49.168043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3241.327 ms 00:16:23.987 [2024-11-20 09:28:49.168054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.193612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.193662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:23.987 [2024-11-20 09:28:49.193676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.350 ms 00:16:23.987 [2024-11-20 09:28:49.193686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.193820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.193832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:23.987 [2024-11-20 09:28:49.193840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:23.987 [2024-11-20 09:28:49.193851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.237239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.237475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:23.987 [2024-11-20 09:28:49.237504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.333 ms 00:16:23.987 [2024-11-20 09:28:49.237521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.237580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.237595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:23.987 [2024-11-20 09:28:49.237607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.987 [2024-11-20 09:28:49.237620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.238044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.238068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:23.987 [2024-11-20 09:28:49.238080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:16:23.987 [2024-11-20 09:28:49.238096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.238277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.238291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:23.987 [2024-11-20 09:28:49.238320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:23.987 [2024-11-20 09:28:49.238335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.254396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.254488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:23.987 [2024-11-20 
09:28:49.254509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.021 ms 00:16:23.987 [2024-11-20 09:28:49.254525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.266034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:23.987 [2024-11-20 09:28:49.280638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.280813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:23.987 [2024-11-20 09:28:49.280834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.970 ms 00:16:23.987 [2024-11-20 09:28:49.280845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.367387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.367599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:23.987 [2024-11-20 09:28:49.367632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.492 ms 00:16:23.987 [2024-11-20 09:28:49.367644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.367888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.367911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:23.987 [2024-11-20 09:28:49.367930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:16:23.987 [2024-11-20 09:28:49.367944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.392019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.392074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:23.987 [2024-11-20 09:28:49.392095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.985 ms 00:16:23.987 [2024-11-20 09:28:49.392108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.415839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.415889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:23.987 [2024-11-20 09:28:49.415909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.671 ms 00:16:23.987 [2024-11-20 09:28:49.415920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.987 [2024-11-20 09:28:49.416642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.987 [2024-11-20 09:28:49.416673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:23.987 [2024-11-20 09:28:49.416690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:16:23.987 [2024-11-20 09:28:49.416701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.245 [2024-11-20 09:28:49.490323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.245 [2024-11-20 09:28:49.490626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.245 [2024-11-20 09:28:49.490658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.554 ms 00:16:24.245 [2024-11-20 09:28:49.490673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.245 [2024-11-20 
09:28:49.515308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.245 [2024-11-20 09:28:49.515357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:24.245 [2024-11-20 09:28:49.515378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.513 ms 00:16:24.245 [2024-11-20 09:28:49.515390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.245 [2024-11-20 09:28:49.539592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.245 [2024-11-20 09:28:49.539640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:24.245 [2024-11-20 09:28:49.539660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.145 ms 00:16:24.245 [2024-11-20 09:28:49.539671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.245 [2024-11-20 09:28:49.563406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.245 [2024-11-20 09:28:49.563454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.245 [2024-11-20 09:28:49.563474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.678 ms 00:16:24.245 [2024-11-20 09:28:49.563485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.245 [2024-11-20 09:28:49.563548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.245 [2024-11-20 09:28:49.563562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.246 [2024-11-20 09:28:49.563580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:24.246 [2024-11-20 09:28:49.563593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.246 [2024-11-20 09:28:49.563714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.246 [2024-11-20 09:28:49.563730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.246 [2024-11-20 09:28:49.563746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:24.246 [2024-11-20 09:28:49.563758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.246 [2024-11-20 09:28:49.564808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3650.956 ms, result 0 00:16:24.246 { 00:16:24.246 "name": "ftl0", 00:16:24.246 "uuid": "d48a70f7-fb74-4d31-8ef7-ce49bfc103f6" 00:16:24.246 } 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.246 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:24.503 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:24.761 [ 00:16:24.761 { 00:16:24.761 "name": "ftl0", 00:16:24.761 "aliases": [ 00:16:24.761 "d48a70f7-fb74-4d31-8ef7-ce49bfc103f6" 00:16:24.761 ], 00:16:24.761 "product_name": "FTL 
disk", 00:16:24.761 "block_size": 4096, 00:16:24.761 "num_blocks": 20971520, 00:16:24.761 "uuid": "d48a70f7-fb74-4d31-8ef7-ce49bfc103f6", 00:16:24.761 "assigned_rate_limits": { 00:16:24.761 "rw_ios_per_sec": 0, 00:16:24.761 "rw_mbytes_per_sec": 0, 00:16:24.761 "r_mbytes_per_sec": 0, 00:16:24.761 "w_mbytes_per_sec": 0 00:16:24.761 }, 00:16:24.761 "claimed": false, 00:16:24.761 "zoned": false, 00:16:24.761 "supported_io_types": { 00:16:24.761 "read": true, 00:16:24.761 "write": true, 00:16:24.761 "unmap": true, 00:16:24.761 "flush": true, 00:16:24.761 "reset": false, 00:16:24.761 "nvme_admin": false, 00:16:24.761 "nvme_io": false, 00:16:24.761 "nvme_io_md": false, 00:16:24.761 "write_zeroes": true, 00:16:24.761 "zcopy": false, 00:16:24.761 "get_zone_info": false, 00:16:24.761 "zone_management": false, 00:16:24.761 "zone_append": false, 00:16:24.761 "compare": false, 00:16:24.761 "compare_and_write": false, 00:16:24.761 "abort": false, 00:16:24.761 "seek_hole": false, 00:16:24.761 "seek_data": false, 00:16:24.761 "copy": false, 00:16:24.761 "nvme_iov_md": false 00:16:24.761 }, 00:16:24.761 "driver_specific": { 00:16:24.761 "ftl": { 00:16:24.761 "base_bdev": "762026ba-20bf-457f-b440-8aaf5834222a", 00:16:24.761 "cache": "nvc0n1p0" 00:16:24.761 } 00:16:24.761 } 00:16:24.761 } 00:16:24.761 ] 00:16:24.761 09:28:49 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:16:24.761 09:28:49 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:24.761 09:28:49 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:24.761 09:28:50 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:24.761 09:28:50 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:25.019 [2024-11-20 09:28:50.389390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.389587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:25.020 [2024-11-20 09:28:50.389612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:25.020 [2024-11-20 09:28:50.389626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.389674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:25.020 [2024-11-20 09:28:50.392496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.392531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:25.020 [2024-11-20 09:28:50.392549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:16:25.020 [2024-11-20 09:28:50.392561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.393009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.393036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:25.020 [2024-11-20 09:28:50.393052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:16:25.020 [2024-11-20 09:28:50.393063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.396409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.396439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:25.020 
[2024-11-20 09:28:50.396455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:16:25.020 [2024-11-20 09:28:50.396468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.402771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.402803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:25.020 [2024-11-20 09:28:50.402819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:16:25.020 [2024-11-20 09:28:50.402830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.426622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.426660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:25.020 [2024-11-20 09:28:50.426679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.660 ms 00:16:25.020 [2024-11-20 09:28:50.426690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.441351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.441393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:25.020 [2024-11-20 09:28:50.441414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.584 ms 00:16:25.020 [2024-11-20 09:28:50.441428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.441661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.441685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:25.020 [2024-11-20 09:28:50.441703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:16:25.020 [2024-11-20 09:28:50.441714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.020 [2024-11-20 09:28:50.465088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.020 [2024-11-20 09:28:50.465126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:25.020 [2024-11-20 09:28:50.465144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.341 ms 00:16:25.020 [2024-11-20 09:28:50.465155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.278 [2024-11-20 09:28:50.487833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.278 [2024-11-20 09:28:50.487994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:25.278 [2024-11-20 09:28:50.488020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:16:25.278 [2024-11-20 09:28:50.488030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.278 [2024-11-20 09:28:50.511018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.278 [2024-11-20 09:28:50.511146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:25.278 [2024-11-20 09:28:50.511219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.930 ms 00:16:25.278 [2024-11-20 09:28:50.511252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.278 [2024-11-20 09:28:50.533896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.278 [2024-11-20 09:28:50.534022] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:25.278 [2024-11-20 09:28:50.534092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.479 ms 00:16:25.278 [2024-11-20 09:28:50.534125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.278 [2024-11-20 09:28:50.534195] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:25.278 [2024-11-20 09:28:50.534239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:25.278 [2024-11-20 09:28:50.534296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:25.278 [2024-11-20 09:28:50.534437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:25.278 [2024-11-20 09:28:50.534499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:25.278 [2024-11-20 09:28:50.534551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.534605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.534722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.534839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.534901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.535997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 
[2024-11-20 09:28:50.536262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.536876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.537957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:25.279 [2024-11-20 09:28:50.538432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.538997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:25.279 [2024-11-20 09:28:50.539235] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:25.279 [2024-11-20 09:28:50.539251] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d48a70f7-fb74-4d31-8ef7-ce49bfc103f6 00:16:25.279 [2024-11-20 09:28:50.539264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:25.279 [2024-11-20 09:28:50.539280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:25.279 [2024-11-20 09:28:50.539293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:25.279 [2024-11-20 09:28:50.539324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:25.279 [2024-11-20 09:28:50.539336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:25.279 [2024-11-20 09:28:50.539352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:25.279 [2024-11-20 09:28:50.539364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:25.279 [2024-11-20 09:28:50.539378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:25.279 [2024-11-20 09:28:50.539389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:25.279 [2024-11-20 09:28:50.539405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.279 [2024-11-20 09:28:50.539418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:25.279 [2024-11-20 09:28:50.539434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms 00:16:25.279 [2024-11-20 09:28:50.539447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.552326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.279 [2024-11-20 09:28:50.552364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:25.279 [2024-11-20 09:28:50.552381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.817 ms 00:16:25.279 [2024-11-20 09:28:50.552393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.552859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.279 [2024-11-20 09:28:50.552886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:25.279 [2024-11-20 09:28:50.552902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:16:25.279 [2024-11-20 09:28:50.552913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.596517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.279 [2024-11-20 09:28:50.596575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.279 [2024-11-20 09:28:50.596594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.279 [2024-11-20 09:28:50.596607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
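The trace steps above come from the bdev_ftl_unload call issued earlier: FTL persists its state (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock), marks the device clean, dumps band validity and statistics, and then the Rollback entries around this point unwind each initialization step in reverse order until the management process reports 'FTL shutdown ... result 0'. The same save-config-then-unload sequence that fio.sh drives here (the echo '{"subsystems": [' / save_subsystem_config / echo ']}' wrapper visible above) can be issued by hand against a running target. A minimal sketch, assuming the repo layout of this run; the redirect target is an assumption, inferred from the ftl.json cleanup later in this log:

cd /home/vagrant/spdk_repo/spdk
# wrap the bdev subsystem dump in a subsystems envelope, as fio.sh does above
{
  echo '{"subsystems": ['
  ./scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > test/ftl/config/ftl.json
# tear down the FTL instance; this is what emits the trace_step NOTICEs seen here
./scripts/rpc.py bdev_ftl_unload -b ftl0
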
00:16:25.279 [2024-11-20 09:28:50.596697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.279 [2024-11-20 09:28:50.596711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.279 [2024-11-20 09:28:50.596725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.279 [2024-11-20 09:28:50.596736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.596864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.279 [2024-11-20 09:28:50.596880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.279 [2024-11-20 09:28:50.596899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.279 [2024-11-20 09:28:50.596912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.596949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.279 [2024-11-20 09:28:50.596962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.279 [2024-11-20 09:28:50.596978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.279 [2024-11-20 09:28:50.596991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.279 [2024-11-20 09:28:50.677310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.279 [2024-11-20 09:28:50.677365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.279 [2024-11-20 09:28:50.677383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.279 [2024-11-20 09:28:50.677394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.739482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.739686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.538 [2024-11-20 09:28:50.739710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.739721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.739840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.739856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.538 [2024-11-20 09:28:50.739871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.739886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.739970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.739985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.538 [2024-11-20 09:28:50.740001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.740014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.740160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.740175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.538 [2024-11-20 09:28:50.740190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 
09:28:50.740203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.740269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.740283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.538 [2024-11-20 09:28:50.740298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.740343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.740404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.740419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.538 [2024-11-20 09:28:50.740434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.740446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.740516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.538 [2024-11-20 09:28:50.740532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.538 [2024-11-20 09:28:50.740548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.538 [2024-11-20 09:28:50.740561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.538 [2024-11-20 09:28:50.740751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.315 ms, result 0 00:16:25.538 true 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72634 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72634 ']' 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72634 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72634 00:16:25.538 killing process with pid 72634 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72634' 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72634 00:16:25.538 09:28:50 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72634 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:37.750 09:29:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:37.750 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:37.750 fio-3.35 00:16:37.750 Starting 1 thread 00:16:40.365 00:16:40.365 test: (groupid=0, jobs=1): err= 0: pid=72832: Wed Nov 20 09:29:05 2024 00:16:40.365 read: IOPS=1342, BW=89.2MiB/s (93.5MB/s)(255MiB/2855msec) 00:16:40.365 slat (nsec): min=2913, max=37087, avg=4491.33, stdev=2138.94 00:16:40.365 clat (usec): min=240, max=987, avg=333.65, stdev=44.15 00:16:40.365 lat (usec): min=246, max=998, avg=338.15, stdev=45.04 00:16:40.365 clat percentiles (usec): 00:16:40.365 | 1.00th=[ 262], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 318], 00:16:40.365 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 326], 00:16:40.365 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 420], 00:16:40.365 | 99.00th=[ 506], 99.50th=[ 586], 99.90th=[ 750], 99.95th=[ 947], 00:16:40.365 | 99.99th=[ 988] 00:16:40.365 write: IOPS=1352, BW=89.8MiB/s (94.2MB/s)(256MiB/2851msec); 0 zone resets 00:16:40.365 slat (nsec): min=13880, max=63464, avg=19258.02, stdev=4084.26 00:16:40.365 clat (usec): min=291, max=1001, avg=371.90, stdev=64.81 00:16:40.365 lat (usec): min=315, max=1022, avg=391.16, stdev=65.28 00:16:40.365 clat percentiles (usec): 00:16:40.365 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 343], 00:16:40.365 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:16:40.365 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 478], 00:16:40.365 | 99.00th=[ 685], 99.50th=[ 725], 99.90th=[ 865], 99.95th=[ 988], 00:16:40.365 | 99.99th=[ 1004] 00:16:40.365 bw ( KiB/s): min=88808, max=93432, per=99.90%, avg=91881.60, stdev=1813.45, samples=5 00:16:40.365 iops : min= 1306, max= 1374, avg=1351.20, stdev=26.67, samples=5 00:16:40.365 lat (usec) : 250=0.05%, 500=97.32%, 750=2.43%, 1000=0.18% 
00:16:40.365 lat (msec) : 2=0.01% 00:16:40.365 cpu : usr=99.16%, sys=0.18%, ctx=6, majf=0, minf=1170 00:16:40.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.365 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.365 00:16:40.365 Run status group 0 (all jobs): 00:16:40.365 READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=255MiB (267MB), run=2855-2855msec 00:16:40.365 WRITE: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=256MiB (269MB), run=2851-2851msec 00:16:42.265 ----------------------------------------------------- 00:16:42.265 Suppressions used: 00:16:42.265 count bytes template 00:16:42.265 1 5 /usr/src/fio/parse.c 00:16:42.265 1 8 libtcmalloc_minimal.so 00:16:42.265 1 904 libcrypto.so 00:16:42.265 ----------------------------------------------------- 00:16:42.265 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:42.265 09:29:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:42.265 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:42.265 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:42.265 fio-3.35 00:16:42.265 Starting 2 threads 00:17:08.815 00:17:08.815 first_half: (groupid=0, jobs=1): err= 0: pid=72924: Wed Nov 20 09:29:30 2024 00:17:08.815 read: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(255MiB/22170msec) 00:17:08.815 slat (nsec): min=2993, max=26580, avg=3867.02, stdev=785.13 00:17:08.815 clat (usec): min=608, max=245348, avg=33242.71, stdev=18329.73 00:17:08.815 lat (usec): min=614, max=245352, avg=33246.58, stdev=18329.75 00:17:08.815 clat percentiles (msec): 00:17:08.815 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 30], 00:17:08.815 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:17:08.815 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 42], 00:17:08.815 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 201], 99.95th=[ 213], 00:17:08.815 | 99.99th=[ 239] 00:17:08.815 write: IOPS=3477, BW=13.6MiB/s (14.2MB/s)(256MiB/18845msec); 0 zone resets 00:17:08.815 slat (usec): min=3, max=691, avg= 5.75, stdev= 5.42 00:17:08.815 clat (usec): min=367, max=79454, avg=10175.30, stdev=16690.85 00:17:08.815 lat (usec): min=374, max=79458, avg=10181.06, stdev=16690.93 00:17:08.815 clat percentiles (usec): 00:17:08.815 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 955], 20.00th=[ 1287], 00:17:08.815 | 30.00th=[ 2769], 40.00th=[ 4015], 50.00th=[ 4752], 60.00th=[ 5342], 00:17:08.815 | 70.00th=[ 6325], 80.00th=[10159], 90.00th=[28705], 95.00th=[62653], 00:17:08.815 | 99.00th=[69731], 99.50th=[71828], 99.90th=[77071], 99.95th=[78119], 00:17:08.815 | 99.99th=[78119] 00:17:08.815 bw ( KiB/s): min= 960, max=40984, per=75.32%, avg=20955.80, stdev=12244.76, samples=25 00:17:08.815 iops : min= 240, max=10246, avg=5238.92, stdev=3061.14, samples=25 00:17:08.815 lat (usec) : 500=0.03%, 750=1.92%, 1000=3.77% 00:17:08.815 lat (msec) : 2=7.31%, 4=7.21%, 10=21.40%, 20=5.00%, 50=47.55% 00:17:08.815 lat (msec) : 100=4.75%, 250=1.05% 00:17:08.815 cpu : usr=99.45%, sys=0.11%, ctx=53, majf=0, minf=5555 00:17:08.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.815 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.815 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.815 second_half: (groupid=0, jobs=1): err= 0: pid=72925: Wed Nov 20 09:29:30 2024 00:17:08.815 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(255MiB/22006msec) 00:17:08.815 slat (nsec): min=2958, max=21296, avg=3854.11, stdev=807.75 00:17:08.815 clat (usec): min=692, max=248402, avg=33907.74, stdev=16783.02 00:17:08.815 lat (usec): min=697, max=248406, avg=33911.60, stdev=16783.02 00:17:08.815 clat percentiles (msec): 00:17:08.815 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:17:08.815 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:17:08.815 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 
44], 00:17:08.815 | 99.00th=[ 131], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 171], 00:17:08.815 | 99.99th=[ 247] 00:17:08.815 write: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(256MiB/16958msec); 0 zone resets 00:17:08.815 slat (usec): min=3, max=355, avg= 5.72, stdev= 2.78 00:17:08.815 clat (usec): min=365, max=79382, avg=9238.21, stdev=16186.69 00:17:08.815 lat (usec): min=375, max=79387, avg=9243.93, stdev=16186.74 00:17:08.815 clat percentiles (usec): 00:17:08.815 | 1.00th=[ 693], 5.00th=[ 848], 10.00th=[ 979], 20.00th=[ 1172], 00:17:08.815 | 30.00th=[ 1532], 40.00th=[ 3032], 50.00th=[ 4293], 60.00th=[ 5276], 00:17:08.815 | 70.00th=[ 6325], 80.00th=[ 9634], 90.00th=[14222], 95.00th=[62129], 00:17:08.815 | 99.00th=[68682], 99.50th=[71828], 99.90th=[77071], 99.95th=[78119], 00:17:08.815 | 99.99th=[79168] 00:17:08.815 bw ( KiB/s): min= 312, max=49144, per=85.66%, avg=23831.27, stdev=16845.48, samples=22 00:17:08.815 iops : min= 78, max=12286, avg=5957.82, stdev=4211.37, samples=22 00:17:08.815 lat (usec) : 500=0.03%, 750=1.06%, 1000=4.40% 00:17:08.815 lat (msec) : 2=11.32%, 4=7.41%, 10=16.96%, 20=5.57%, 50=47.34% 00:17:08.815 lat (msec) : 100=4.85%, 250=1.07% 00:17:08.815 cpu : usr=99.32%, sys=0.07%, ctx=33, majf=0, minf=5562 00:17:08.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.815 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.815 issued rwts: total=65184,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.815 00:17:08.815 Run status group 0 (all jobs): 00:17:08.815 READ: bw=23.0MiB/s (24.1MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.1MB/s), io=509MiB (534MB), run=22006-22170msec 00:17:08.815 WRITE: bw=27.2MiB/s (28.5MB/s), 13.6MiB/s-15.1MiB/s (14.2MB/s-15.8MB/s), io=512MiB (537MB), run=16958-18845msec 00:17:08.815 ----------------------------------------------------- 00:17:08.815 Suppressions used: 00:17:08.815 count bytes template 00:17:08.815 2 10 /usr/src/fio/parse.c 00:17:08.815 2 192 /usr/src/fio/iolog.c 00:17:08.815 1 8 libtcmalloc_minimal.so 00:17:08.815 1 904 libcrypto.so 00:17:08.815 ----------------------------------------------------- 00:17:08.815 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:08.815 09:29:32 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:08.815 09:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:08.815 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:08.815 fio-3.35 00:17:08.815 Starting 1 thread 00:17:21.119 00:17:21.119 test: (groupid=0, jobs=1): err= 0: pid=73225: Wed Nov 20 09:29:45 2024 00:17:21.119 read: IOPS=8083, BW=31.6MiB/s (33.1MB/s)(255MiB/8066msec) 00:17:21.119 slat (nsec): min=3054, max=21349, avg=3547.93, stdev=694.62 00:17:21.119 clat (usec): min=515, max=31155, avg=15827.70, stdev=1735.84 00:17:21.119 lat (usec): min=520, max=31159, avg=15831.24, stdev=1735.86 00:17:21.119 clat percentiles (usec): 00:17:21.119 | 1.00th=[13435], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:17:21.119 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:17:21.119 | 70.00th=[15795], 80.00th=[15926], 90.00th=[17171], 95.00th=[19268], 00:17:21.119 | 99.00th=[23200], 99.50th=[24511], 99.90th=[26870], 99.95th=[27395], 00:17:21.119 | 99.99th=[30540] 00:17:21.119 write: IOPS=16.3k, BW=63.9MiB/s (67.0MB/s)(256MiB/4009msec); 0 zone resets 00:17:21.119 slat (usec): min=4, max=144, avg= 6.58, stdev= 2.56 00:17:21.119 clat (usec): min=469, max=63700, avg=7786.46, stdev=10217.37 00:17:21.119 lat (usec): min=476, max=63705, avg=7793.04, stdev=10217.35 00:17:21.119 clat percentiles (usec): 00:17:21.119 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 873], 20.00th=[ 996], 00:17:21.119 | 30.00th=[ 1123], 40.00th=[ 1598], 50.00th=[ 4752], 60.00th=[ 5407], 00:17:21.119 | 70.00th=[ 6325], 80.00th=[ 8094], 90.00th=[28967], 95.00th=[31065], 00:17:21.119 | 99.00th=[36439], 99.50th=[38536], 99.90th=[45351], 99.95th=[52167], 00:17:21.119 | 99.99th=[61080] 00:17:21.119 bw ( KiB/s): min= 1016, max=93480, per=88.98%, avg=58181.56, stdev=25577.16, samples=9 00:17:21.119 iops : min= 254, max=23370, avg=14545.33, stdev=6394.23, samples=9 00:17:21.119 lat (usec) : 500=0.01%, 750=2.17%, 1000=8.04% 00:17:21.119 lat (msec) : 2=10.32%, 4=1.25%, 10=20.15%, 20=47.97%, 50=10.06% 00:17:21.119 lat (msec) : 100=0.03% 00:17:21.119 cpu : usr=99.06%, sys=0.23%, ctx=28, majf=0, 
minf=5565 00:17:21.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:21.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.119 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.119 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.119 00:17:21.119 Run status group 0 (all jobs): 00:17:21.119 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=255MiB (267MB), run=8066-8066msec 00:17:21.119 WRITE: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=256MiB (268MB), run=4009-4009msec 00:17:22.491 ----------------------------------------------------- 00:17:22.491 Suppressions used: 00:17:22.491 count bytes template 00:17:22.491 1 5 /usr/src/fio/parse.c 00:17:22.491 2 192 /usr/src/fio/iolog.c 00:17:22.491 1 8 libtcmalloc_minimal.so 00:17:22.491 1 904 libcrypto.so 00:17:22.491 ----------------------------------------------------- 00:17:22.491 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.491 Remove shared memory files 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57153 /dev/shm/spdk_tgt_trace.pid71547 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:22.491 09:29:47 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:22.491 ************************************ 00:17:22.491 END TEST ftl_fio_basic 00:17:22.491 ************************************ 00:17:22.491 00:17:22.491 real 1m5.586s 00:17:22.491 user 2m29.258s 00:17:22.491 sys 0m2.656s 00:17:22.492 09:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.492 09:29:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.492 09:29:47 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:22.492 09:29:47 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:22.492 09:29:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.492 09:29:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:22.492 ************************************ 00:17:22.492 START TEST ftl_bdevperf 00:17:22.492 ************************************ 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:22.492 * Looking for test storage... 
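All three fio phases of this test (randw-verify, randw-verify-j2, randw-verify-depth128) run through the SPDK fio bdev plugin: fio_bdev places the plugin, together with libasan for this ASAN build, into LD_PRELOAD and points fio at a job file whose spdk_bdev ioengine targets the exported ftl0 bdev. The job files themselves are not reproduced in the log; the following is a hypothetical reconstruction of the depth-128 job, inferred only from the fio banner (rw=randwrite, bs=4096B, iodepth=128) and the ftl.json config path, not the actual file contents:

# hypothetical job file; the real one is test/ftl/config/fio/randw-verify-depth128.fio
cat > /tmp/randw-verify-depth128.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
thread=1
direct=1
rw=randwrite
bs=4096
iodepth=128
verify=crc32c

[test]
filename=ftl0
EOF
# the invocation mirrors the LD_PRELOAD line captured earlier in this log
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio /tmp/randw-verify-depth128.fio
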
00:17:22.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.492 --rc genhtml_branch_coverage=1 00:17:22.492 --rc genhtml_function_coverage=1 00:17:22.492 --rc genhtml_legend=1 00:17:22.492 --rc geninfo_all_blocks=1 00:17:22.492 --rc geninfo_unexecuted_blocks=1 00:17:22.492 00:17:22.492 ' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.492 --rc genhtml_branch_coverage=1 00:17:22.492 
--rc genhtml_function_coverage=1 00:17:22.492 --rc genhtml_legend=1 00:17:22.492 --rc geninfo_all_blocks=1 00:17:22.492 --rc geninfo_unexecuted_blocks=1 00:17:22.492 00:17:22.492 ' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.492 --rc genhtml_branch_coverage=1 00:17:22.492 --rc genhtml_function_coverage=1 00:17:22.492 --rc genhtml_legend=1 00:17:22.492 --rc geninfo_all_blocks=1 00:17:22.492 --rc geninfo_unexecuted_blocks=1 00:17:22.492 00:17:22.492 ' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.492 --rc genhtml_branch_coverage=1 00:17:22.492 --rc genhtml_function_coverage=1 00:17:22.492 --rc genhtml_legend=1 00:17:22.492 --rc geninfo_all_blocks=1 00:17:22.492 --rc geninfo_unexecuted_blocks=1 00:17:22.492 00:17:22.492 ' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73456 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73456 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73456 ']' 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.492 09:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:22.493 [2024-11-20 09:29:47.886554] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:17:22.493 [2024-11-20 09:29:47.886820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73456 ] 00:17:22.750 [2024-11-20 09:29:48.043006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.750 [2024-11-20 09:29:48.128100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.400 09:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.400 09:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:17:23.400 09:29:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:23.401 09:29:48 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:23.401 09:29:48 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:23.401 09:29:48 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:23.401 09:29:48 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:23.401 09:29:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:23.657 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:23.658 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:23.915 { 00:17:23.915 "name": "nvme0n1", 00:17:23.915 "aliases": [ 00:17:23.915 "24dc3bfa-9f94-400d-ac95-2f0aa7cb1d93" 00:17:23.915 ], 00:17:23.915 "product_name": "NVMe disk", 00:17:23.915 "block_size": 4096, 00:17:23.915 "num_blocks": 1310720, 00:17:23.915 "uuid": "24dc3bfa-9f94-400d-ac95-2f0aa7cb1d93", 00:17:23.915 "numa_id": -1, 00:17:23.915 "assigned_rate_limits": { 00:17:23.915 "rw_ios_per_sec": 0, 00:17:23.915 "rw_mbytes_per_sec": 0, 00:17:23.915 "r_mbytes_per_sec": 0, 00:17:23.915 "w_mbytes_per_sec": 0 00:17:23.915 }, 00:17:23.915 "claimed": true, 00:17:23.915 "claim_type": "read_many_write_one", 00:17:23.915 "zoned": false, 00:17:23.915 "supported_io_types": { 00:17:23.915 "read": true, 00:17:23.915 "write": true, 00:17:23.915 "unmap": true, 00:17:23.915 "flush": true, 00:17:23.915 "reset": true, 00:17:23.915 "nvme_admin": true, 00:17:23.915 "nvme_io": true, 00:17:23.915 "nvme_io_md": false, 00:17:23.915 "write_zeroes": true, 00:17:23.915 "zcopy": false, 00:17:23.915 "get_zone_info": false, 00:17:23.915 "zone_management": false, 00:17:23.915 "zone_append": false, 00:17:23.915 "compare": true, 00:17:23.915 "compare_and_write": false, 00:17:23.915 "abort": true, 00:17:23.915 "seek_hole": false, 00:17:23.915 "seek_data": false, 00:17:23.915 "copy": true, 00:17:23.915 "nvme_iov_md": false 00:17:23.915 }, 00:17:23.915 "driver_specific": { 00:17:23.915 
"nvme": [ 00:17:23.915 { 00:17:23.915 "pci_address": "0000:00:11.0", 00:17:23.915 "trid": { 00:17:23.915 "trtype": "PCIe", 00:17:23.915 "traddr": "0000:00:11.0" 00:17:23.915 }, 00:17:23.915 "ctrlr_data": { 00:17:23.915 "cntlid": 0, 00:17:23.915 "vendor_id": "0x1b36", 00:17:23.915 "model_number": "QEMU NVMe Ctrl", 00:17:23.915 "serial_number": "12341", 00:17:23.915 "firmware_revision": "8.0.0", 00:17:23.915 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:23.915 "oacs": { 00:17:23.915 "security": 0, 00:17:23.915 "format": 1, 00:17:23.915 "firmware": 0, 00:17:23.915 "ns_manage": 1 00:17:23.915 }, 00:17:23.915 "multi_ctrlr": false, 00:17:23.915 "ana_reporting": false 00:17:23.915 }, 00:17:23.915 "vs": { 00:17:23.915 "nvme_version": "1.4" 00:17:23.915 }, 00:17:23.915 "ns_data": { 00:17:23.915 "id": 1, 00:17:23.915 "can_share": false 00:17:23.915 } 00:17:23.915 } 00:17:23.915 ], 00:17:23.915 "mp_policy": "active_passive" 00:17:23.915 } 00:17:23.915 } 00:17:23.915 ]' 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:23.915 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:24.173 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=99f4c114-87a4-4072-94d3-8f3d9aa873c0 00:17:24.173 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:24.173 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99f4c114-87a4-4072-94d3-8f3d9aa873c0 00:17:24.431 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:24.431 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=dd127d64-9ce6-45af-948b-f92ec9aa7f3a 00:17:24.431 09:29:49 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dd127d64-9ce6-45af-948b-f92ec9aa7f3a 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.689 09:29:50 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:24.689 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:24.946 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:24.946 { 00:17:24.946 "name": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:24.946 "aliases": [ 00:17:24.946 "lvs/nvme0n1p0" 00:17:24.946 ], 00:17:24.946 "product_name": "Logical Volume", 00:17:24.946 "block_size": 4096, 00:17:24.946 "num_blocks": 26476544, 00:17:24.946 "uuid": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:24.946 "assigned_rate_limits": { 00:17:24.946 "rw_ios_per_sec": 0, 00:17:24.946 "rw_mbytes_per_sec": 0, 00:17:24.946 "r_mbytes_per_sec": 0, 00:17:24.946 "w_mbytes_per_sec": 0 00:17:24.946 }, 00:17:24.946 "claimed": false, 00:17:24.946 "zoned": false, 00:17:24.946 "supported_io_types": { 00:17:24.946 "read": true, 00:17:24.946 "write": true, 00:17:24.946 "unmap": true, 00:17:24.946 "flush": false, 00:17:24.946 "reset": true, 00:17:24.946 "nvme_admin": false, 00:17:24.946 "nvme_io": false, 00:17:24.946 "nvme_io_md": false, 00:17:24.946 "write_zeroes": true, 00:17:24.946 "zcopy": false, 00:17:24.946 "get_zone_info": false, 00:17:24.946 "zone_management": false, 00:17:24.946 "zone_append": false, 00:17:24.946 "compare": false, 00:17:24.946 "compare_and_write": false, 00:17:24.946 "abort": false, 00:17:24.946 "seek_hole": true, 00:17:24.946 "seek_data": true, 00:17:24.946 "copy": false, 00:17:24.946 "nvme_iov_md": false 00:17:24.946 }, 00:17:24.946 "driver_specific": { 00:17:24.946 "lvol": { 00:17:24.946 "lvol_store_uuid": "dd127d64-9ce6-45af-948b-f92ec9aa7f3a", 00:17:24.946 "base_bdev": "nvme0n1", 00:17:24.946 "thin_provision": true, 00:17:24.946 "num_allocated_clusters": 0, 00:17:24.946 "snapshot": false, 00:17:24.946 "clone": false, 00:17:24.946 "esnap_clone": false 00:17:24.946 } 00:17:24.947 } 00:17:24.947 } 00:17:24.947 ]' 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:24.947 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:25.204 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.462 { 00:17:25.462 "name": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:25.462 "aliases": [ 00:17:25.462 "lvs/nvme0n1p0" 00:17:25.462 ], 00:17:25.462 "product_name": "Logical Volume", 00:17:25.462 "block_size": 4096, 00:17:25.462 "num_blocks": 26476544, 00:17:25.462 "uuid": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:25.462 "assigned_rate_limits": { 00:17:25.462 "rw_ios_per_sec": 0, 00:17:25.462 "rw_mbytes_per_sec": 0, 00:17:25.462 "r_mbytes_per_sec": 0, 00:17:25.462 "w_mbytes_per_sec": 0 00:17:25.462 }, 00:17:25.462 "claimed": false, 00:17:25.462 "zoned": false, 00:17:25.462 "supported_io_types": { 00:17:25.462 "read": true, 00:17:25.462 "write": true, 00:17:25.462 "unmap": true, 00:17:25.462 "flush": false, 00:17:25.462 "reset": true, 00:17:25.462 "nvme_admin": false, 00:17:25.462 "nvme_io": false, 00:17:25.462 "nvme_io_md": false, 00:17:25.462 "write_zeroes": true, 00:17:25.462 "zcopy": false, 00:17:25.462 "get_zone_info": false, 00:17:25.462 "zone_management": false, 00:17:25.462 "zone_append": false, 00:17:25.462 "compare": false, 00:17:25.462 "compare_and_write": false, 00:17:25.462 "abort": false, 00:17:25.462 "seek_hole": true, 00:17:25.462 "seek_data": true, 00:17:25.462 "copy": false, 00:17:25.462 "nvme_iov_md": false 00:17:25.462 }, 00:17:25.462 "driver_specific": { 00:17:25.462 "lvol": { 00:17:25.462 "lvol_store_uuid": "dd127d64-9ce6-45af-948b-f92ec9aa7f3a", 00:17:25.462 "base_bdev": "nvme0n1", 00:17:25.462 "thin_provision": true, 00:17:25.462 "num_allocated_clusters": 0, 00:17:25.462 "snapshot": false, 00:17:25.462 "clone": false, 00:17:25.462 "esnap_clone": false 00:17:25.462 } 00:17:25.462 } 00:17:25.462 } 00:17:25.462 ]' 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.462 09:29:50 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:25.720 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 795a496e-b164-47fb-81b9-ad79d3d2d06f 00:17:25.978 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.978 { 00:17:25.978 "name": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:25.978 "aliases": [ 00:17:25.978 "lvs/nvme0n1p0" 00:17:25.978 ], 00:17:25.978 "product_name": "Logical Volume", 00:17:25.978 "block_size": 4096, 00:17:25.978 "num_blocks": 26476544, 00:17:25.978 "uuid": "795a496e-b164-47fb-81b9-ad79d3d2d06f", 00:17:25.978 "assigned_rate_limits": { 00:17:25.978 "rw_ios_per_sec": 0, 00:17:25.978 "rw_mbytes_per_sec": 0, 00:17:25.978 "r_mbytes_per_sec": 0, 00:17:25.978 "w_mbytes_per_sec": 0 00:17:25.978 }, 00:17:25.978 "claimed": false, 00:17:25.978 "zoned": false, 00:17:25.978 "supported_io_types": { 00:17:25.978 "read": true, 00:17:25.978 "write": true, 00:17:25.978 "unmap": true, 00:17:25.978 "flush": false, 00:17:25.978 "reset": true, 00:17:25.978 "nvme_admin": false, 00:17:25.978 "nvme_io": false, 00:17:25.978 "nvme_io_md": false, 00:17:25.978 "write_zeroes": true, 00:17:25.978 "zcopy": false, 00:17:25.978 "get_zone_info": false, 00:17:25.978 "zone_management": false, 00:17:25.978 "zone_append": false, 00:17:25.978 "compare": false, 00:17:25.978 "compare_and_write": false, 00:17:25.978 "abort": false, 00:17:25.979 "seek_hole": true, 00:17:25.979 "seek_data": true, 00:17:25.979 "copy": false, 00:17:25.979 "nvme_iov_md": false 00:17:25.979 }, 00:17:25.979 "driver_specific": { 00:17:25.979 "lvol": { 00:17:25.979 "lvol_store_uuid": "dd127d64-9ce6-45af-948b-f92ec9aa7f3a", 00:17:25.979 "base_bdev": "nvme0n1", 00:17:25.979 "thin_provision": true, 00:17:25.979 "num_allocated_clusters": 0, 00:17:25.979 "snapshot": false, 00:17:25.979 "clone": false, 00:17:25.979 "esnap_clone": false 00:17:25.979 } 00:17:25.979 } 00:17:25.979 } 00:17:25.979 ]' 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:25.979 09:29:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 795a496e-b164-47fb-81b9-ad79d3d2d06f -c nvc0n1p0 --l2p_dram_limit 20 00:17:26.237 [2024-11-20 09:29:51.478137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.478366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.237 [2024-11-20 09:29:51.478387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.237 [2024-11-20 09:29:51.478399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.478455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.478477] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.237 [2024-11-20 09:29:51.478490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:26.237 [2024-11-20 09:29:51.478499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.478516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.237 [2024-11-20 09:29:51.479269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.237 [2024-11-20 09:29:51.479284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.479293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.237 [2024-11-20 09:29:51.479313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:17:26.237 [2024-11-20 09:29:51.479323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.479436] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e472af3a-9a5e-4c67-a7ce-1c2c106eebfb 00:17:26.237 [2024-11-20 09:29:51.480500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.480531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:26.237 [2024-11-20 09:29:51.480543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:26.237 [2024-11-20 09:29:51.480553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.485434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.485557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.237 [2024-11-20 09:29:51.485575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:17:26.237 [2024-11-20 09:29:51.485583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.485669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.485678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.237 [2024-11-20 09:29:51.485690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.237 [2024-11-20 09:29:51.485697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.485732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.485741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.237 [2024-11-20 09:29:51.485750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:26.237 [2024-11-20 09:29:51.485757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.485777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.237 [2024-11-20 09:29:51.489344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.489374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.237 [2024-11-20 09:29:51.489383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:17:26.237 [2024-11-20 09:29:51.489393] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.489423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.489433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.237 [2024-11-20 09:29:51.489441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:26.237 [2024-11-20 09:29:51.489449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.489478] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:26.237 [2024-11-20 09:29:51.489619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.237 [2024-11-20 09:29:51.489631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.237 [2024-11-20 09:29:51.489643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.237 [2024-11-20 09:29:51.489652] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.237 [2024-11-20 09:29:51.489662] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.237 [2024-11-20 09:29:51.489670] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:26.237 [2024-11-20 09:29:51.489678] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.237 [2024-11-20 09:29:51.489685] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.237 [2024-11-20 09:29:51.489694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.237 [2024-11-20 09:29:51.489701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.489711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.237 [2024-11-20 09:29:51.489718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:17:26.237 [2024-11-20 09:29:51.489728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.489808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.237 [2024-11-20 09:29:51.489818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.237 [2024-11-20 09:29:51.489825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:26.237 [2024-11-20 09:29:51.489834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.237 [2024-11-20 09:29:51.489934] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.237 [2024-11-20 09:29:51.489946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.237 [2024-11-20 09:29:51.489956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.237 [2024-11-20 09:29:51.489965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.489973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.238 [2024-11-20 09:29:51.489981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.489988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:26.238 
[2024-11-20 09:29:51.489997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.238 [2024-11-20 09:29:51.490003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.238 [2024-11-20 09:29:51.490018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.238 [2024-11-20 09:29:51.490026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:26.238 [2024-11-20 09:29:51.490033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.238 [2024-11-20 09:29:51.490047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.238 [2024-11-20 09:29:51.490054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:26.238 [2024-11-20 09:29:51.490064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.238 [2024-11-20 09:29:51.490085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.238 [2024-11-20 09:29:51.490106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.238 [2024-11-20 09:29:51.490128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.238 [2024-11-20 09:29:51.490149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.238 [2024-11-20 09:29:51.490171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.238 [2024-11-20 09:29:51.490194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.238 [2024-11-20 09:29:51.490208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.238 [2024-11-20 09:29:51.490216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:26.238 [2024-11-20 09:29:51.490222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.238 [2024-11-20 09:29:51.490230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.238 [2024-11-20 09:29:51.490236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:26.238 [2024-11-20 09:29:51.490244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.238 [2024-11-20 09:29:51.490258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:26.238 [2024-11-20 09:29:51.490264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.238 [2024-11-20 09:29:51.490279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.238 [2024-11-20 09:29:51.490288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.238 [2024-11-20 09:29:51.490329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.238 [2024-11-20 09:29:51.490336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.238 [2024-11-20 09:29:51.490345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.238 [2024-11-20 09:29:51.490352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.238 [2024-11-20 09:29:51.490360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.238 [2024-11-20 09:29:51.490367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.238 [2024-11-20 09:29:51.490378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.238 [2024-11-20 09:29:51.490387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:26.238 [2024-11-20 09:29:51.490404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:26.238 [2024-11-20 09:29:51.490413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:26.238 [2024-11-20 09:29:51.490419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:26.238 [2024-11-20 09:29:51.490428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:26.238 [2024-11-20 09:29:51.490435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:26.238 [2024-11-20 09:29:51.490444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:26.238 [2024-11-20 09:29:51.490451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:26.238 [2024-11-20 09:29:51.490468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:26.238 [2024-11-20 09:29:51.490475] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:26.238 [2024-11-20 09:29:51.490516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.238 [2024-11-20 09:29:51.490524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.238 [2024-11-20 09:29:51.490540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.238 [2024-11-20 09:29:51.490549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.238 [2024-11-20 09:29:51.490557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.238 [2024-11-20 09:29:51.490565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.238 [2024-11-20 09:29:51.490574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.238 [2024-11-20 09:29:51.490583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:17:26.238 [2024-11-20 09:29:51.490590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.238 [2024-11-20 09:29:51.490624] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
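Before the scrub kicks off, the layout dump above is worth a cross-check: the L2P region stores one 4-byte entry per 4 KiB logical block, so its on-disk size follows directly from the entry count printed in the dump. A quick verification using only numbers from this trace:

    # 20971520 L2P entries * 4 B address size = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" in the dump above.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80
    # Only 20 MiB of that table may stay resident (--l2p_dram_limit 20 at create
    # time); the later "l2p maximum resident size is: 19 (of 20) MiB" notice
    # reflects that cap.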
00:17:26.238 [2024-11-20 09:29:51.490633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:28.825 [2024-11-20 09:29:53.948708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:53.948940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:28.825 [2024-11-20 09:29:53.948969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2458.068 ms 00:17:28.825 [2024-11-20 09:29:53.948978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:53.974555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:53.974599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.825 [2024-11-20 09:29:53.974613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.375 ms 00:17:28.825 [2024-11-20 09:29:53.974621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:53.974750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:53.974761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.825 [2024-11-20 09:29:53.974773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:28.825 [2024-11-20 09:29:53.974780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.014704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.014750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.825 [2024-11-20 09:29:54.014767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.876 ms 00:17:28.825 [2024-11-20 09:29:54.014775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.014814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.014827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.825 [2024-11-20 09:29:54.014837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:28.825 [2024-11-20 09:29:54.014844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.015220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.015236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.825 [2024-11-20 09:29:54.015247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:28.825 [2024-11-20 09:29:54.015254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.015392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.015403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.825 [2024-11-20 09:29:54.015415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:28.825 [2024-11-20 09:29:54.015422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.028464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.028657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.825 [2024-11-20 
09:29:54.028676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.025 ms 00:17:28.825 [2024-11-20 09:29:54.028683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.039984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:28.825 [2024-11-20 09:29:54.044993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.045028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.825 [2024-11-20 09:29:54.045039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.237 ms 00:17:28.825 [2024-11-20 09:29:54.045048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.108512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.108571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:28.825 [2024-11-20 09:29:54.108584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.439 ms 00:17:28.825 [2024-11-20 09:29:54.108593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.108769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.108783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.825 [2024-11-20 09:29:54.108792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:28.825 [2024-11-20 09:29:54.108801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.132036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.132082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:28.825 [2024-11-20 09:29:54.132095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.194 ms 00:17:28.825 [2024-11-20 09:29:54.132104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.154525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.154694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:28.825 [2024-11-20 09:29:54.154712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.386 ms 00:17:28.825 [2024-11-20 09:29:54.154721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.155285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.155322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.825 [2024-11-20 09:29:54.155333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:28.825 [2024-11-20 09:29:54.155342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.225432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.225645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:28.825 [2024-11-20 09:29:54.225663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.054 ms 00:17:28.825 [2024-11-20 09:29:54.225673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 
09:29:54.249805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.249853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:28.825 [2024-11-20 09:29:54.249865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:17:28.825 [2024-11-20 09:29:54.249878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.825 [2024-11-20 09:29:54.273166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.825 [2024-11-20 09:29:54.273376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:28.825 [2024-11-20 09:29:54.273393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.251 ms 00:17:28.825 [2024-11-20 09:29:54.273403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.082 [2024-11-20 09:29:54.296846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.082 [2024-11-20 09:29:54.296984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.082 [2024-11-20 09:29:54.297000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.407 ms 00:17:29.082 [2024-11-20 09:29:54.297009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.082 [2024-11-20 09:29:54.297042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.082 [2024-11-20 09:29:54.297056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.082 [2024-11-20 09:29:54.297065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.082 [2024-11-20 09:29:54.297073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.082 [2024-11-20 09:29:54.297148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.082 [2024-11-20 09:29:54.297160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.082 [2024-11-20 09:29:54.297168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:29.082 [2024-11-20 09:29:54.297177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.082 [2024-11-20 09:29:54.298051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2819.508 ms, result 0 00:17:29.082 { 00:17:29.082 "name": "ftl0", 00:17:29.082 "uuid": "e472af3a-9a5e-4c67-a7ce-1c2c106eebfb" 00:17:29.082 } 00:17:29.082 09:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:29.082 09:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:29.082 09:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:29.082 09:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:29.339 [2024-11-20 09:29:54.610376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:29.339 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:29.339 Zero copy mechanism will not be used. 00:17:29.339 Running I/O for 4 seconds... 
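All three measurement passes that follow are driven through the same helper; only queue depth (-q), workload (-w), run time (-t) and IO size (-o, in bytes) change between them. The zero-copy notice above is simple arithmetic: 69632 B = 17 x 4096 B blocks, which exceeds the 65536 B threshold. A sketch of the invocation, with path and arguments exactly as printed in this log:

    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$bdevperf_py" perform_tests -q 1 -w randwrite -t 4 -o 69632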
00:17:31.200 3047.00 IOPS, 202.34 MiB/s [2024-11-20T09:29:58.028Z] 3144.00 IOPS, 208.78 MiB/s [2024-11-20T09:29:58.959Z] 3159.33 IOPS, 209.80 MiB/s [2024-11-20T09:29:58.959Z] 3165.25 IOPS, 210.19 MiB/s 00:17:33.503 Latency(us) 00:17:33.503 [2024-11-20T09:29:58.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.503 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:33.503 ftl0 : 4.00 3163.93 210.10 0.00 0.00 331.45 166.20 2318.97 00:17:33.503 [2024-11-20T09:29:58.959Z] =================================================================================================================== 00:17:33.503 [2024-11-20T09:29:58.959Z] Total : 3163.93 210.10 0.00 0.00 331.45 166.20 2318.97 00:17:33.503 [2024-11-20 09:29:58.620890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:33.503 { 00:17:33.503 "results": [ 00:17:33.503 { 00:17:33.503 "job": "ftl0", 00:17:33.503 "core_mask": "0x1", 00:17:33.503 "workload": "randwrite", 00:17:33.503 "status": "finished", 00:17:33.503 "queue_depth": 1, 00:17:33.503 "io_size": 69632, 00:17:33.503 "runtime": 4.001984, 00:17:33.503 "iops": 3163.930690377573, 00:17:33.503 "mibps": 210.1047724078857, 00:17:33.503 "io_failed": 0, 00:17:33.503 "io_timeout": 0, 00:17:33.503 "avg_latency_us": 331.45059013644703, 00:17:33.503 "min_latency_us": 166.20307692307694, 00:17:33.503 "max_latency_us": 2318.9661538461537 00:17:33.503 } 00:17:33.503 ], 00:17:33.503 "core_count": 1 00:17:33.503 } 00:17:33.503 09:29:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:33.503 [2024-11-20 09:29:58.740007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:33.503 Running I/O for 4 seconds... 
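The summary block above is internally consistent: throughput in MiB/s is just IOPS times IO size. A one-line check against the q=1 figures from this run:

    echo '3163.93 * 69632 / 1048576' | bc -l   # ~210.10 MiB/s, matching the "mibps" field above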
00:17:35.365 9190.00 IOPS, 35.90 MiB/s [2024-11-20T09:30:01.795Z] 9470.00 IOPS, 36.99 MiB/s [2024-11-20T09:30:02.763Z] 8685.67 IOPS, 33.93 MiB/s [2024-11-20T09:30:03.021Z] 8424.50 IOPS, 32.91 MiB/s 00:17:37.565 Latency(us) 00:17:37.565 [2024-11-20T09:30:03.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.565 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.565 ftl0 : 4.01 8425.42 32.91 0.00 0.00 15164.03 241.03 49404.06 00:17:37.565 [2024-11-20T09:30:03.021Z] =================================================================================================================== 00:17:37.565 [2024-11-20T09:30:03.021Z] Total : 8425.42 32.91 0.00 0.00 15164.03 0.00 49404.06 00:17:37.565 { 00:17:37.565 "results": [ 00:17:37.565 { 00:17:37.565 "job": "ftl0", 00:17:37.565 "core_mask": "0x1", 00:17:37.565 "workload": "randwrite", 00:17:37.565 "status": "finished", 00:17:37.565 "queue_depth": 128, 00:17:37.565 "io_size": 4096, 00:17:37.565 "runtime": 4.013807, 00:17:37.565 "iops": 8425.417564920286, 00:17:37.565 "mibps": 32.91178736296987, 00:17:37.565 "io_failed": 0, 00:17:37.565 "io_timeout": 0, 00:17:37.565 "avg_latency_us": 15164.030455151327, 00:17:37.565 "min_latency_us": 241.03384615384616, 00:17:37.565 "max_latency_us": 49404.06153846154 00:17:37.565 } 00:17:37.565 ], 00:17:37.565 "core_count": 1 00:17:37.565 } 00:17:37.565 [2024-11-20 09:30:02.762873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:37.565 09:30:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:37.565 [2024-11-20 09:30:02.908586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:37.565 Running I/O for 4 seconds... 
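The verify pass below reports "Verification LBA range: start 0x0 length 0x1400000"; with 4096-byte IOs that length is counted in blocks, and it equals the 20971520 L2P entries reported at startup, i.e. the run verifies the entire addressable space. A quick check:

    printf '%d blocks * 4096 B = %d GiB\n' $((0x1400000)) $(( 0x1400000 * 4096 / 1024**3 ))   # 20971520 blocks -> 80 GiB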
00:17:39.872 6687.00 IOPS, 26.12 MiB/s [2024-11-20T09:30:06.260Z] 7567.00 IOPS, 29.56 MiB/s [2024-11-20T09:30:07.258Z] 7821.00 IOPS, 30.55 MiB/s [2024-11-20T09:30:07.258Z] 8352.75 IOPS, 32.63 MiB/s 00:17:41.802 Latency(us) 00:17:41.802 [2024-11-20T09:30:07.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.802 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.802 Verification LBA range: start 0x0 length 0x1400000 00:17:41.802 ftl0 : 4.01 8369.23 32.69 0.00 0.00 15251.57 220.55 75820.11 00:17:41.802 [2024-11-20T09:30:07.258Z] =================================================================================================================== 00:17:41.802 [2024-11-20T09:30:07.258Z] Total : 8369.23 32.69 0.00 0.00 15251.57 0.00 75820.11 00:17:41.802 [2024-11-20 09:30:06.928727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:41.802 { 00:17:41.802 "results": [ 00:17:41.802 { 00:17:41.802 "job": "ftl0", 00:17:41.802 "core_mask": "0x1", 00:17:41.802 "workload": "verify", 00:17:41.802 "status": "finished", 00:17:41.802 "verify_range": { 00:17:41.802 "start": 0, 00:17:41.802 "length": 20971520 00:17:41.802 }, 00:17:41.802 "queue_depth": 128, 00:17:41.802 "io_size": 4096, 00:17:41.802 "runtime": 4.0073, 00:17:41.802 "iops": 8369.226162253886, 00:17:41.802 "mibps": 32.69228969630424, 00:17:41.802 "io_failed": 0, 00:17:41.802 "io_timeout": 0, 00:17:41.802 "avg_latency_us": 15251.568204700063, 00:17:41.802 "min_latency_us": 220.55384615384617, 00:17:41.802 "max_latency_us": 75820.11076923077 00:17:41.802 } 00:17:41.802 ], 00:17:41.802 "core_count": 1 00:17:41.802 } 00:17:41.802 09:30:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:41.802 [2024-11-20 09:30:07.133260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.802 [2024-11-20 09:30:07.133424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.802 [2024-11-20 09:30:07.133478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.802 [2024-11-20 09:30:07.133501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.802 [2024-11-20 09:30:07.133533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.802 [2024-11-20 09:30:07.135644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.802 [2024-11-20 09:30:07.135733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.802 [2024-11-20 09:30:07.135788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:17:41.802 [2024-11-20 09:30:07.135808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.802 [2024-11-20 09:30:07.137312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.802 [2024-11-20 09:30:07.137399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.802 [2024-11-20 09:30:07.137467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:17:41.802 [2024-11-20 09:30:07.137487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.259150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.259322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:17:42.061 [2024-11-20 09:30:07.259382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.624 ms 00:17:42.061 [2024-11-20 09:30:07.259403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.264367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.264454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.061 [2024-11-20 09:30:07.264497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.925 ms 00:17:42.061 [2024-11-20 09:30:07.264515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.282978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.283076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.061 [2024-11-20 09:30:07.283122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.402 ms 00:17:42.061 [2024-11-20 09:30:07.283140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.295019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.295117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.061 [2024-11-20 09:30:07.295166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.845 ms 00:17:42.061 [2024-11-20 09:30:07.295185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.295290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.295328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.061 [2024-11-20 09:30:07.295349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:42.061 [2024-11-20 09:30:07.295365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.313246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.313348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.061 [2024-11-20 09:30:07.313393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.811 ms 00:17:42.061 [2024-11-20 09:30:07.313410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.330719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.330812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.061 [2024-11-20 09:30:07.330854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.275 ms 00:17:42.061 [2024-11-20 09:30:07.330871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.348224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.348317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.061 [2024-11-20 09:30:07.348361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.321 ms 00:17:42.061 [2024-11-20 09:30:07.348379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.365419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.061 [2024-11-20 09:30:07.365504] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.061 [2024-11-20 09:30:07.365549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.982 ms 00:17:42.061 [2024-11-20 09:30:07.365566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.061 [2024-11-20 09:30:07.365597] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.061 [2024-11-20 09:30:07.365619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.365924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.061 [2024-11-20 09:30:07.366202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:42.062 [2024-11-20 09:30:07.366425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.366992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367317] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.062 [2024-11-20 09:30:07.367350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.062 [2024-11-20 09:30:07.367358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e472af3a-9a5e-4c67-a7ce-1c2c106eebfb 00:17:42.062 [2024-11-20 09:30:07.367364] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.062 [2024-11-20 09:30:07.367371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.062 [2024-11-20 09:30:07.367378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.062 [2024-11-20 09:30:07.367388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.062 [2024-11-20 09:30:07.367393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.062 [2024-11-20 09:30:07.367401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.062 [2024-11-20 09:30:07.367406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.063 [2024-11-20 09:30:07.367415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.063 [2024-11-20 09:30:07.367420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.063 [2024-11-20 09:30:07.367427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.063 [2024-11-20 09:30:07.367433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.063 [2024-11-20 09:30:07.367441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:17:42.063 [2024-11-20 09:30:07.367446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.377449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.063 [2024-11-20 09:30:07.377532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.063 [2024-11-20 09:30:07.377578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.977 ms 00:17:42.063 [2024-11-20 09:30:07.377596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.377873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.063 [2024-11-20 09:30:07.377894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.063 [2024-11-20 09:30:07.377961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:42.063 [2024-11-20 09:30:07.377979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.405649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.063 [2024-11-20 09:30:07.405752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.063 [2024-11-20 09:30:07.405803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.063 [2024-11-20 09:30:07.405821] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.405880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.063 [2024-11-20 09:30:07.405897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.063 [2024-11-20 09:30:07.405913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.063 [2024-11-20 09:30:07.405928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.405992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.063 [2024-11-20 09:30:07.406053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.063 [2024-11-20 09:30:07.406086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.063 [2024-11-20 09:30:07.406100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.406124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.063 [2024-11-20 09:30:07.406141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.063 [2024-11-20 09:30:07.406157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.063 [2024-11-20 09:30:07.406172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.063 [2024-11-20 09:30:07.467389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.063 [2024-11-20 09:30:07.467531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.063 [2024-11-20 09:30:07.467576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.063 [2024-11-20 09:30:07.467594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.516492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.516655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.321 [2024-11-20 09:30:07.516700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.516717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.516806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.516826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.321 [2024-11-20 09:30:07.516846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.516884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.516936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.516954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.321 [2024-11-20 09:30:07.516970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.516985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.517073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.517159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.321 [2024-11-20 09:30:07.517226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:42.321 [2024-11-20 09:30:07.517244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.517316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.517338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.321 [2024-11-20 09:30:07.517355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.517370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.517410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.517559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.321 [2024-11-20 09:30:07.517580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.517597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.517643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.321 [2024-11-20 09:30:07.517668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.321 [2024-11-20 09:30:07.517716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.321 [2024-11-20 09:30:07.517734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-20 09:30:07.517837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.546 ms, result 0 00:17:42.321 true 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73456 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73456 ']' 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73456 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73456 00:17:42.321 killing process with pid 73456 00:17:42.321 Received shutdown signal, test time was about 4.000000 seconds 00:17:42.321 00:17:42.321 Latency(us) 00:17:42.321 [2024-11-20T09:30:07.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.321 [2024-11-20T09:30:07.777Z] =================================================================================================================== 00:17:42.321 [2024-11-20T09:30:07.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73456' 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73456 00:17:42.321 09:30:07 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73456 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:42.887 Remove shared memory files 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:42.887 09:30:08 
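The killprocess trace above (autotest_common.sh@954-@978) follows a standard bash teardown pattern. A condensed sketch, assuming a helper shaped like the one traced here — the upstream version has extra branches (non-Linux hosts, sudo-wrapped processes) that this omits:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1      # no pid recorded, nothing to kill
        kill -0 "$pid" || return 0     # process already exited
        # the trace resolves the process name first (ps -o comm=); here it
        # is reactor_0, i.e. the SPDK application's primary reactor
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                    # reap it so the exit status is observed
    }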
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:42.887 ************************************ 00:17:42.887 END TEST ftl_bdevperf 00:17:42.887 ************************************ 00:17:42.887 00:17:42.887 real 0m20.542s 00:17:42.887 user 0m23.237s 00:17:42.887 sys 0m0.803s 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.887 09:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:42.887 09:30:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:42.887 09:30:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:42.887 09:30:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.887 09:30:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:42.887 ************************************ 00:17:42.887 START TEST ftl_trim 00:17:42.887 ************************************ 00:17:42.887 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:43.145 * Looking for test storage... 00:17:43.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.145 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.145 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.145 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.145 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.145 09:30:08 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.146 09:30:08 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.146 --rc genhtml_branch_coverage=1 00:17:43.146 --rc genhtml_function_coverage=1 00:17:43.146 --rc genhtml_legend=1 00:17:43.146 --rc geninfo_all_blocks=1 00:17:43.146 --rc geninfo_unexecuted_blocks=1 00:17:43.146 00:17:43.146 ' 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.146 --rc genhtml_branch_coverage=1 00:17:43.146 --rc genhtml_function_coverage=1 00:17:43.146 --rc genhtml_legend=1 00:17:43.146 --rc geninfo_all_blocks=1 00:17:43.146 --rc geninfo_unexecuted_blocks=1 00:17:43.146 00:17:43.146 ' 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.146 --rc genhtml_branch_coverage=1 00:17:43.146 --rc genhtml_function_coverage=1 00:17:43.146 --rc genhtml_legend=1 00:17:43.146 --rc geninfo_all_blocks=1 00:17:43.146 --rc geninfo_unexecuted_blocks=1 00:17:43.146 00:17:43.146 ' 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.146 --rc genhtml_branch_coverage=1 00:17:43.146 --rc genhtml_function_coverage=1 00:17:43.146 --rc genhtml_legend=1 00:17:43.146 --rc geninfo_all_blocks=1 00:17:43.146 --rc geninfo_unexecuted_blocks=1 00:17:43.146 00:17:43.146 ' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
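The common.sh bootstrap traced here (ftl/common.sh@8-@10, continuing just below) is the usual bash self-location idiom; as a minimal standalone sketch:

    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # RPC client used for all bdev setup below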
00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.146 09:30:08 ftl.ftl_trim -- 
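With the trim parameters exported (base device 0000:00:11.0, cache device 0000:00:10.0, a 240 s RPC timeout, bdev name ftl0), the next trace entries start spdk_tgt and block until its RPC socket answers. A simplified sketch of that launch-and-wait step — the real waitforlisten helper also handles custom socket paths and an overall timeout, which this loop omits:

    "$rootdir/build/bin/spdk_tgt" -m 0x7 &   # mask 0x7 = 3 cores, matching --core_mask 7 later
    svcpid=$!
    # poll until the RPC server responds; rpc_get_methods is a cheap query
    until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done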
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73786 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73786 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73786 ']' 00:17:43.146 09:30:08 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.146 09:30:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:43.146 [2024-11-20 09:30:08.548631] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:17:43.146 [2024-11-20 09:30:08.548898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73786 ] 00:17:43.405 [2024-11-20 09:30:08.710424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.405 [2024-11-20 09:30:08.816920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.405 [2024-11-20 09:30:08.817755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.405 [2024-11-20 09:30:08.817839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.969 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.969 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:43.969 09:30:09 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.534 { 00:17:44.534 "name": "nvme0n1", 00:17:44.534 "aliases": [ 
00:17:44.534 "16535a30-e10a-4ecf-be71-49719d24dfed" 00:17:44.534 ], 00:17:44.534 "product_name": "NVMe disk", 00:17:44.534 "block_size": 4096, 00:17:44.534 "num_blocks": 1310720, 00:17:44.534 "uuid": "16535a30-e10a-4ecf-be71-49719d24dfed", 00:17:44.534 "numa_id": -1, 00:17:44.534 "assigned_rate_limits": { 00:17:44.534 "rw_ios_per_sec": 0, 00:17:44.534 "rw_mbytes_per_sec": 0, 00:17:44.534 "r_mbytes_per_sec": 0, 00:17:44.534 "w_mbytes_per_sec": 0 00:17:44.534 }, 00:17:44.534 "claimed": true, 00:17:44.534 "claim_type": "read_many_write_one", 00:17:44.534 "zoned": false, 00:17:44.534 "supported_io_types": { 00:17:44.534 "read": true, 00:17:44.534 "write": true, 00:17:44.534 "unmap": true, 00:17:44.534 "flush": true, 00:17:44.534 "reset": true, 00:17:44.534 "nvme_admin": true, 00:17:44.534 "nvme_io": true, 00:17:44.534 "nvme_io_md": false, 00:17:44.534 "write_zeroes": true, 00:17:44.534 "zcopy": false, 00:17:44.534 "get_zone_info": false, 00:17:44.534 "zone_management": false, 00:17:44.534 "zone_append": false, 00:17:44.534 "compare": true, 00:17:44.534 "compare_and_write": false, 00:17:44.534 "abort": true, 00:17:44.534 "seek_hole": false, 00:17:44.534 "seek_data": false, 00:17:44.534 "copy": true, 00:17:44.534 "nvme_iov_md": false 00:17:44.534 }, 00:17:44.534 "driver_specific": { 00:17:44.534 "nvme": [ 00:17:44.534 { 00:17:44.534 "pci_address": "0000:00:11.0", 00:17:44.534 "trid": { 00:17:44.534 "trtype": "PCIe", 00:17:44.534 "traddr": "0000:00:11.0" 00:17:44.534 }, 00:17:44.534 "ctrlr_data": { 00:17:44.534 "cntlid": 0, 00:17:44.534 "vendor_id": "0x1b36", 00:17:44.534 "model_number": "QEMU NVMe Ctrl", 00:17:44.534 "serial_number": "12341", 00:17:44.534 "firmware_revision": "8.0.0", 00:17:44.534 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:44.534 "oacs": { 00:17:44.534 "security": 0, 00:17:44.534 "format": 1, 00:17:44.534 "firmware": 0, 00:17:44.534 "ns_manage": 1 00:17:44.534 }, 00:17:44.534 "multi_ctrlr": false, 00:17:44.534 "ana_reporting": false 00:17:44.534 }, 00:17:44.534 "vs": { 00:17:44.534 "nvme_version": "1.4" 00:17:44.534 }, 00:17:44.534 "ns_data": { 00:17:44.534 "id": 1, 00:17:44.534 "can_share": false 00:17:44.534 } 00:17:44.534 } 00:17:44.534 ], 00:17:44.534 "mp_policy": "active_passive" 00:17:44.534 } 00:17:44.534 } 00:17:44.534 ]' 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:44.534 09:30:09 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:44.534 09:30:09 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:45.100 09:30:10 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=dd127d64-9ce6-45af-948b-f92ec9aa7f3a 00:17:45.100 09:30:10 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:45.100 09:30:10 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u dd127d64-9ce6-45af-948b-f92ec9aa7f3a 00:17:45.100 09:30:10 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:45.358 09:30:10 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ce1061f0-b198-4fbe-ba08-bb3364fe8312 00:17:45.358 09:30:10 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ce1061f0-b198-4fbe-ba08-bb3364fe8312 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:45.615 09:30:10 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.615 09:30:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.615 09:30:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:45.615 09:30:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:45.615 09:30:10 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:45.615 09:30:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:45.872 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:45.872 { 00:17:45.872 "name": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:45.872 "aliases": [ 00:17:45.872 "lvs/nvme0n1p0" 00:17:45.872 ], 00:17:45.872 "product_name": "Logical Volume", 00:17:45.872 "block_size": 4096, 00:17:45.872 "num_blocks": 26476544, 00:17:45.872 "uuid": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:45.872 "assigned_rate_limits": { 00:17:45.872 "rw_ios_per_sec": 0, 00:17:45.872 "rw_mbytes_per_sec": 0, 00:17:45.872 "r_mbytes_per_sec": 0, 00:17:45.872 "w_mbytes_per_sec": 0 00:17:45.872 }, 00:17:45.872 "claimed": false, 00:17:45.872 "zoned": false, 00:17:45.872 "supported_io_types": { 00:17:45.872 "read": true, 00:17:45.872 "write": true, 00:17:45.872 "unmap": true, 00:17:45.872 "flush": false, 00:17:45.872 "reset": true, 00:17:45.872 "nvme_admin": false, 00:17:45.872 "nvme_io": false, 00:17:45.872 "nvme_io_md": false, 00:17:45.872 "write_zeroes": true, 00:17:45.872 "zcopy": false, 00:17:45.872 "get_zone_info": false, 00:17:45.872 "zone_management": false, 00:17:45.872 "zone_append": false, 00:17:45.872 "compare": false, 00:17:45.872 "compare_and_write": false, 00:17:45.872 "abort": false, 00:17:45.872 "seek_hole": true, 00:17:45.872 "seek_data": true, 00:17:45.872 "copy": false, 00:17:45.872 "nvme_iov_md": false 00:17:45.872 }, 00:17:45.872 "driver_specific": { 00:17:45.872 "lvol": { 00:17:45.872 "lvol_store_uuid": "ce1061f0-b198-4fbe-ba08-bb3364fe8312", 00:17:45.872 "base_bdev": "nvme0n1", 00:17:45.872 "thin_provision": true, 00:17:45.872 "num_allocated_clusters": 0, 00:17:45.872 "snapshot": false, 00:17:45.872 "clone": false, 00:17:45.872 "esnap_clone": false 00:17:45.872 } 00:17:45.872 } 00:17:45.872 } 00:17:45.872 ]' 00:17:45.873 09:30:11 ftl.ftl_trim -- 
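The bdev_info JSON dumped above is consumed by the jq calls that follow (autotest_common.sh@1387-@1392): block_size times num_blocks, converted to MiB. For this lvol bdev, 4096 B x 26476544 blocks = 108447924224 B = 103424 MiB, the value echoed below. Condensed into one hypothetical helper (bdev_size_mb is an illustrative name, not the upstream function):

    bdev_size_mb() {
        local name=$1 bs nb
        bs=$("$rpc_py" bdev_get_bdevs -b "$name" | jq '.[] .block_size')
        nb=$("$rpc_py" bdev_get_bdevs -b "$name" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))   # bytes -> MiB
    }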
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:45.873 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:45.873 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:45.873 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:45.873 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:45.873 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:45.873 09:30:11 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:45.873 09:30:11 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:45.873 09:30:11 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:46.129 09:30:11 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:46.129 09:30:11 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:46.129 09:30:11 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.129 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.129 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:46.129 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:46.129 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:46.129 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:46.386 { 00:17:46.386 "name": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:46.386 "aliases": [ 00:17:46.386 "lvs/nvme0n1p0" 00:17:46.386 ], 00:17:46.386 "product_name": "Logical Volume", 00:17:46.386 "block_size": 4096, 00:17:46.386 "num_blocks": 26476544, 00:17:46.386 "uuid": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:46.386 "assigned_rate_limits": { 00:17:46.386 "rw_ios_per_sec": 0, 00:17:46.386 "rw_mbytes_per_sec": 0, 00:17:46.386 "r_mbytes_per_sec": 0, 00:17:46.386 "w_mbytes_per_sec": 0 00:17:46.386 }, 00:17:46.386 "claimed": false, 00:17:46.386 "zoned": false, 00:17:46.386 "supported_io_types": { 00:17:46.386 "read": true, 00:17:46.386 "write": true, 00:17:46.386 "unmap": true, 00:17:46.386 "flush": false, 00:17:46.386 "reset": true, 00:17:46.386 "nvme_admin": false, 00:17:46.386 "nvme_io": false, 00:17:46.386 "nvme_io_md": false, 00:17:46.386 "write_zeroes": true, 00:17:46.386 "zcopy": false, 00:17:46.386 "get_zone_info": false, 00:17:46.386 "zone_management": false, 00:17:46.386 "zone_append": false, 00:17:46.386 "compare": false, 00:17:46.386 "compare_and_write": false, 00:17:46.386 "abort": false, 00:17:46.386 "seek_hole": true, 00:17:46.386 "seek_data": true, 00:17:46.386 "copy": false, 00:17:46.386 "nvme_iov_md": false 00:17:46.386 }, 00:17:46.386 "driver_specific": { 00:17:46.386 "lvol": { 00:17:46.386 "lvol_store_uuid": "ce1061f0-b198-4fbe-ba08-bb3364fe8312", 00:17:46.386 "base_bdev": "nvme0n1", 00:17:46.386 "thin_provision": true, 00:17:46.386 "num_allocated_clusters": 0, 00:17:46.386 "snapshot": false, 00:17:46.386 "clone": false, 00:17:46.386 "esnap_clone": false 00:17:46.386 } 00:17:46.386 } 00:17:46.386 } 00:17:46.386 ]' 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:46.386 09:30:11 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:46.386 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:46.386 09:30:11 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:46.386 09:30:11 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:46.642 09:30:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:46.642 09:30:11 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:46.642 09:30:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.642 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.642 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:46.642 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:46.642 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:46.642 09:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:46.900 { 00:17:46.900 "name": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:46.900 "aliases": [ 00:17:46.900 "lvs/nvme0n1p0" 00:17:46.900 ], 00:17:46.900 "product_name": "Logical Volume", 00:17:46.900 "block_size": 4096, 00:17:46.900 "num_blocks": 26476544, 00:17:46.900 "uuid": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:46.900 "assigned_rate_limits": { 00:17:46.900 "rw_ios_per_sec": 0, 00:17:46.900 "rw_mbytes_per_sec": 0, 00:17:46.900 "r_mbytes_per_sec": 0, 00:17:46.900 "w_mbytes_per_sec": 0 00:17:46.900 }, 00:17:46.900 "claimed": false, 00:17:46.900 "zoned": false, 00:17:46.900 "supported_io_types": { 00:17:46.900 "read": true, 00:17:46.900 "write": true, 00:17:46.900 "unmap": true, 00:17:46.900 "flush": false, 00:17:46.900 "reset": true, 00:17:46.900 "nvme_admin": false, 00:17:46.900 "nvme_io": false, 00:17:46.900 "nvme_io_md": false, 00:17:46.900 "write_zeroes": true, 00:17:46.900 "zcopy": false, 00:17:46.900 "get_zone_info": false, 00:17:46.900 "zone_management": false, 00:17:46.900 "zone_append": false, 00:17:46.900 "compare": false, 00:17:46.900 "compare_and_write": false, 00:17:46.900 "abort": false, 00:17:46.900 "seek_hole": true, 00:17:46.900 "seek_data": true, 00:17:46.900 "copy": false, 00:17:46.900 "nvme_iov_md": false 00:17:46.900 }, 00:17:46.900 "driver_specific": { 00:17:46.900 "lvol": { 00:17:46.900 "lvol_store_uuid": "ce1061f0-b198-4fbe-ba08-bb3364fe8312", 00:17:46.900 "base_bdev": "nvme0n1", 00:17:46.900 "thin_provision": true, 00:17:46.900 "num_allocated_clusters": 0, 00:17:46.900 "snapshot": false, 00:17:46.900 "clone": false, 00:17:46.900 "esnap_clone": false 00:17:46.900 } 00:17:46.900 } 00:17:46.900 } 00:17:46.900 ]' 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:46.900 09:30:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:46.900 09:30:12 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:46.900 09:30:12 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:47.190 [2024-11-20 09:30:12.359669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.359718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:47.190 [2024-11-20 09:30:12.359733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:47.190 [2024-11-20 09:30:12.359742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.362665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.362701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:47.190 [2024-11-20 09:30:12.362714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:17:47.190 [2024-11-20 09:30:12.362723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.362831] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:47.190 [2024-11-20 09:30:12.363533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:47.190 [2024-11-20 09:30:12.363557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.363566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.190 [2024-11-20 09:30:12.363576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:17:47.190 [2024-11-20 09:30:12.363583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.363676] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:17:47.190 [2024-11-20 09:30:12.364765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.364796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:47.190 [2024-11-20 09:30:12.364807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:47.190 [2024-11-20 09:30:12.364815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.370510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.370616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.190 [2024-11-20 09:30:12.370671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:17:47.190 [2024-11-20 09:30:12.370699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.370850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.370884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.190 [2024-11-20 09:30:12.370909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.071 ms 00:17:47.190 [2024-11-20 09:30:12.370934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.370974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.371054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:47.190 [2024-11-20 09:30:12.371082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:47.190 [2024-11-20 09:30:12.371104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.371145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:47.190 [2024-11-20 09:30:12.374774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.374870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.190 [2024-11-20 09:30:12.374929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.631 ms 00:17:47.190 [2024-11-20 09:30:12.374952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.375024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.375053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:47.190 [2024-11-20 09:30:12.375076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:47.190 [2024-11-20 09:30:12.375105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.375142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:47.190 [2024-11-20 09:30:12.375294] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:47.190 [2024-11-20 09:30:12.375350] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:47.190 [2024-11-20 09:30:12.375420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:47.190 [2024-11-20 09:30:12.375459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:47.190 [2024-11-20 09:30:12.375516] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:47.190 [2024-11-20 09:30:12.375552] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:47.190 [2024-11-20 09:30:12.375572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:47.190 [2024-11-20 09:30:12.375593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:47.190 [2024-11-20 09:30:12.375614] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:47.190 [2024-11-20 09:30:12.375636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 [2024-11-20 09:30:12.375655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:47.190 [2024-11-20 09:30:12.375676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:17:47.190 [2024-11-20 09:30:12.375695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.375807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.190 
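For reference, the full RPC sequence that provisioned the stack this bdev_ftl_create trace is initializing, collected from the xtrace lines earlier in the test (the UUIDs are the per-run values from this log and will differ on other runs):

    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc_py bdev_lvol_delete_lvstore -u dd127d64-9ce6-45af-948b-f92ec9aa7f3a   # clear_lvols
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u ce1061f0-b198-4fbe-ba08-bb3364fe8312
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc_py bdev_split_create nvc0n1 -s 5171 1
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d 2d8f2715-e5bf-40a8-9de7-8ba9bb95268a \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10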
[2024-11-20 09:30:12.375831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:47.190 [2024-11-20 09:30:12.375906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:47.190 [2024-11-20 09:30:12.375929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.190 [2024-11-20 09:30:12.376056] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:47.190 [2024-11-20 09:30:12.376083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:47.190 [2024-11-20 09:30:12.376106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:47.190 [2024-11-20 09:30:12.376126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.190 [2024-11-20 09:30:12.376147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:47.190 [2024-11-20 09:30:12.376165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:47.190 [2024-11-20 09:30:12.376186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:47.190 [2024-11-20 09:30:12.376205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:47.191 [2024-11-20 09:30:12.376225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:47.191 [2024-11-20 09:30:12.376311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:47.191 [2024-11-20 09:30:12.376335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:47.191 [2024-11-20 09:30:12.376355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:47.191 [2024-11-20 09:30:12.376403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:47.191 [2024-11-20 09:30:12.376428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:47.191 [2024-11-20 09:30:12.376447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:47.191 [2024-11-20 09:30:12.376513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:47.191 [2024-11-20 09:30:12.376537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:47.191 [2024-11-20 09:30:12.376583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.191 [2024-11-20 09:30:12.376622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:47.191 [2024-11-20 09:30:12.376640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.191 [2024-11-20 09:30:12.376678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:47.191 [2024-11-20 09:30:12.376697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.191 [2024-11-20 09:30:12.376735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:47.191 [2024-11-20 09:30:12.376753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.191 [2024-11-20 09:30:12.376830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:47.191 [2024-11-20 09:30:12.376851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:47.191 [2024-11-20 09:30:12.376893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:47.191 [2024-11-20 09:30:12.376916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:47.191 [2024-11-20 09:30:12.376934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:47.191 [2024-11-20 09:30:12.376979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:47.191 [2024-11-20 09:30:12.377000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:47.191 [2024-11-20 09:30:12.377041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:47.191 [2024-11-20 09:30:12.377062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.191 [2024-11-20 09:30:12.377135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:47.191 [2024-11-20 09:30:12.377157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:47.191 [2024-11-20 09:30:12.377177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.191 [2024-11-20 09:30:12.377195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:47.191 [2024-11-20 09:30:12.377215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:47.191 [2024-11-20 09:30:12.377246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:47.191 [2024-11-20 09:30:12.377268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.191 [2024-11-20 09:30:12.377277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:47.191 [2024-11-20 09:30:12.377288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:47.191 [2024-11-20 09:30:12.377294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:47.191 [2024-11-20 09:30:12.377345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:47.191 [2024-11-20 09:30:12.377404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:47.191 [2024-11-20 09:30:12.377418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:47.191 [2024-11-20 09:30:12.377430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:47.191 [2024-11-20 09:30:12.377443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:47.191 [2024-11-20 09:30:12.377460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:47.191 [2024-11-20 09:30:12.377468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:47.191 [2024-11-20 09:30:12.377477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:47.191 [2024-11-20 09:30:12.377484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:47.191 [2024-11-20 09:30:12.377492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:47.191 [2024-11-20 09:30:12.377499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:47.191 [2024-11-20 09:30:12.377508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:47.191 [2024-11-20 09:30:12.377515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:47.191 [2024-11-20 09:30:12.377525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:47.191 [2024-11-20 09:30:12.377563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:47.191 [2024-11-20 09:30:12.377579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:47.191 [2024-11-20 09:30:12.377595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:47.191 [2024-11-20 09:30:12.377602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:47.191 [2024-11-20 09:30:12.377611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:47.191 [2024-11-20 09:30:12.377619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.191 [2024-11-20 09:30:12.377628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:47.191 [2024-11-20 09:30:12.377635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:17:47.191 [2024-11-20 09:30:12.377644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.191 [2024-11-20 09:30:12.377718] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:47.191 [2024-11-20 09:30:12.377730] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:49.716 [2024-11-20 09:30:15.030252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.030793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:49.716 [2024-11-20 09:30:15.030908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2652.523 ms 00:17:49.716 [2024-11-20 09:30:15.030940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.056684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.056898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.716 [2024-11-20 09:30:15.056968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.435 ms 00:17:49.716 [2024-11-20 09:30:15.056994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.057152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.057295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:49.716 [2024-11-20 09:30:15.057342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:49.716 [2024-11-20 09:30:15.057367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.098179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.098427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.716 [2024-11-20 09:30:15.098519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.750 ms 00:17:49.716 [2024-11-20 09:30:15.098550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.098689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.098832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.716 [2024-11-20 09:30:15.098868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.716 [2024-11-20 09:30:15.098898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.099316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.099453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.716 [2024-11-20 09:30:15.099528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:17:49.716 [2024-11-20 09:30:15.099564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.099755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.099797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.716 [2024-11-20 09:30:15.099871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:49.716 [2024-11-20 09:30:15.099908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.117020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.117151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:49.716 [2024-11-20 09:30:15.117217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:17:49.716 [2024-11-20 09:30:15.117243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.716 [2024-11-20 09:30:15.128647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:49.716 [2024-11-20 09:30:15.144274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.716 [2024-11-20 09:30:15.144466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:49.716 [2024-11-20 09:30:15.144486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.885 ms 00:17:49.716 [2024-11-20 09:30:15.144495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.207087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.207147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:49.976 [2024-11-20 09:30:15.207162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.500 ms 00:17:49.976 [2024-11-20 09:30:15.207171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.207429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.207442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:49.976 [2024-11-20 09:30:15.207455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:17:49.976 [2024-11-20 09:30:15.207462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.230902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.231090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:49.976 [2024-11-20 09:30:15.231113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.402 ms 00:17:49.976 [2024-11-20 09:30:15.231122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.254232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.254365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:49.976 [2024-11-20 09:30:15.254438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:17:49.976 [2024-11-20 09:30:15.254460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.255075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.255162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:49.976 [2024-11-20 09:30:15.255222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:49.976 [2024-11-20 09:30:15.255245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.326287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.326453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:49.976 [2024-11-20 09:30:15.326559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.991 ms 00:17:49.976 [2024-11-20 09:30:15.326585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
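Each management step in the startup trace above is emitted as a fixed quartet of trace_step records from mngt/ftl_mngt.c (Action, name, duration, status), which makes the startup timeline easy to summarize offline. A minimal sketch of such a summary, assuming the console output has been saved one record per line to a hypothetical file ftl0_startup.log:

  awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }             # remember the step name
       /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")    # keep only the millisecond value
                          printf "%10.3f ms  %s\n", $0, name }' ftl0_startup.log | sort -rn

Run against the records above, this ranks Scrub NV cache (2652.523 ms) first, which accounts for most of the 3039.687 ms 'FTL startup' total reported just below.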
00:17:49.976 [2024-11-20 09:30:15.350927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.351056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:49.976 [2024-11-20 09:30:15.351111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.232 ms 00:17:49.976 [2024-11-20 09:30:15.351134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.374610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.374742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:49.976 [2024-11-20 09:30:15.374822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.406 ms 00:17:49.976 [2024-11-20 09:30:15.374845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.397858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.398030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:49.976 [2024-11-20 09:30:15.398087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.930 ms 00:17:49.976 [2024-11-20 09:30:15.398124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.398205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.398234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:49.976 [2024-11-20 09:30:15.398258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:49.976 [2024-11-20 09:30:15.398345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.398445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.976 [2024-11-20 09:30:15.398469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:49.976 [2024-11-20 09:30:15.398510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:49.976 [2024-11-20 09:30:15.398736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.976 [2024-11-20 09:30:15.399663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.976 [2024-11-20 09:30:15.403142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3039.687 ms, result 0 00:17:49.976 [2024-11-20 09:30:15.404003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.976 { 00:17:49.976 "name": "ftl0", 00:17:49.976 "uuid": "eba3f3fd-920f-46ac-aa10-0eb07aaa862a" 00:17:49.976 } 00:17:49.976 09:30:15 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.977 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:50.234 09:30:15 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:50.492 [ 00:17:50.492 { 00:17:50.492 "name": "ftl0", 00:17:50.492 "aliases": [ 00:17:50.492 "eba3f3fd-920f-46ac-aa10-0eb07aaa862a" 00:17:50.492 ], 00:17:50.492 "product_name": "FTL disk", 00:17:50.492 "block_size": 4096, 00:17:50.492 "num_blocks": 23592960, 00:17:50.492 "uuid": "eba3f3fd-920f-46ac-aa10-0eb07aaa862a", 00:17:50.492 "assigned_rate_limits": { 00:17:50.492 "rw_ios_per_sec": 0, 00:17:50.492 "rw_mbytes_per_sec": 0, 00:17:50.492 "r_mbytes_per_sec": 0, 00:17:50.492 "w_mbytes_per_sec": 0 00:17:50.492 }, 00:17:50.492 "claimed": false, 00:17:50.492 "zoned": false, 00:17:50.492 "supported_io_types": { 00:17:50.492 "read": true, 00:17:50.492 "write": true, 00:17:50.492 "unmap": true, 00:17:50.492 "flush": true, 00:17:50.492 "reset": false, 00:17:50.492 "nvme_admin": false, 00:17:50.492 "nvme_io": false, 00:17:50.492 "nvme_io_md": false, 00:17:50.492 "write_zeroes": true, 00:17:50.492 "zcopy": false, 00:17:50.492 "get_zone_info": false, 00:17:50.492 "zone_management": false, 00:17:50.492 "zone_append": false, 00:17:50.492 "compare": false, 00:17:50.492 "compare_and_write": false, 00:17:50.492 "abort": false, 00:17:50.492 "seek_hole": false, 00:17:50.492 "seek_data": false, 00:17:50.492 "copy": false, 00:17:50.492 "nvme_iov_md": false 00:17:50.492 }, 00:17:50.492 "driver_specific": { 00:17:50.492 "ftl": { 00:17:50.492 "base_bdev": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 00:17:50.492 "cache": "nvc0n1p0" 00:17:50.492 } 00:17:50.492 } 00:17:50.492 } 00:17:50.492 ] 00:17:50.492 09:30:15 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:50.492 09:30:15 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:50.492 09:30:15 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:50.750 09:30:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:50.750 09:30:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:51.007 09:30:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:51.007 { 00:17:51.007 "name": "ftl0", 00:17:51.007 "aliases": [ 00:17:51.007 "eba3f3fd-920f-46ac-aa10-0eb07aaa862a" 00:17:51.007 ], 00:17:51.007 "product_name": "FTL disk", 00:17:51.007 "block_size": 4096, 00:17:51.007 "num_blocks": 23592960, 00:17:51.007 "uuid": "eba3f3fd-920f-46ac-aa10-0eb07aaa862a", 00:17:51.007 "assigned_rate_limits": { 00:17:51.007 "rw_ios_per_sec": 0, 00:17:51.007 "rw_mbytes_per_sec": 0, 00:17:51.007 "r_mbytes_per_sec": 0, 00:17:51.007 "w_mbytes_per_sec": 0 00:17:51.007 }, 00:17:51.007 "claimed": false, 00:17:51.007 "zoned": false, 00:17:51.007 "supported_io_types": { 00:17:51.007 "read": true, 00:17:51.007 "write": true, 00:17:51.007 "unmap": true, 00:17:51.007 "flush": true, 00:17:51.007 "reset": false, 00:17:51.007 "nvme_admin": false, 00:17:51.007 "nvme_io": false, 00:17:51.007 "nvme_io_md": false, 00:17:51.007 "write_zeroes": true, 00:17:51.007 "zcopy": false, 00:17:51.007 "get_zone_info": false, 00:17:51.007 "zone_management": false, 00:17:51.007 "zone_append": false, 00:17:51.007 "compare": false, 00:17:51.007 "compare_and_write": false, 00:17:51.007 "abort": false, 00:17:51.007 "seek_hole": false, 00:17:51.007 "seek_data": false, 00:17:51.007 "copy": false, 00:17:51.007 "nvme_iov_md": false 00:17:51.007 }, 00:17:51.007 "driver_specific": { 00:17:51.007 "ftl": { 00:17:51.007 "base_bdev": "2d8f2715-e5bf-40a8-9de7-8ba9bb95268a", 
00:17:51.007 "cache": "nvc0n1p0" 00:17:51.007 } 00:17:51.007 } 00:17:51.007 } 00:17:51.007 ]' 00:17:51.007 09:30:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:51.007 09:30:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:51.007 09:30:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:51.266 [2024-11-20 09:30:16.487367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.487423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.266 [2024-11-20 09:30:16.487439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:51.266 [2024-11-20 09:30:16.487451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.487484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:51.266 [2024-11-20 09:30:16.490070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.490241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.266 [2024-11-20 09:30:16.490265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:17:51.266 [2024-11-20 09:30:16.490273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.490803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.490820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.266 [2024-11-20 09:30:16.490831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:17:51.266 [2024-11-20 09:30:16.490838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.494473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.494503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.266 [2024-11-20 09:30:16.494515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:17:51.266 [2024-11-20 09:30:16.494523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.501454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.501489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:51.266 [2024-11-20 09:30:16.501502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.866 ms 00:17:51.266 [2024-11-20 09:30:16.501509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.526035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.526088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.266 [2024-11-20 09:30:16.526106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.447 ms 00:17:51.266 [2024-11-20 09:30:16.526115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.541089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.541283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.266 [2024-11-20 09:30:16.541322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.890 ms 00:17:51.266 [2024-11-20 09:30:16.541334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.541546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.541558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.266 [2024-11-20 09:30:16.541568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:51.266 [2024-11-20 09:30:16.541575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.565438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.565480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:51.266 [2024-11-20 09:30:16.565493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.832 ms 00:17:51.266 [2024-11-20 09:30:16.565503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.588982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.589028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:51.266 [2024-11-20 09:30:16.589046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.400 ms 00:17:51.266 [2024-11-20 09:30:16.589053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.612192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.266 [2024-11-20 09:30:16.612345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.266 [2024-11-20 09:30:16.612366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.063 ms 00:17:51.266 [2024-11-20 09:30:16.612374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.266 [2024-11-20 09:30:16.634597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.267 [2024-11-20 09:30:16.634636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.267 [2024-11-20 09:30:16.634656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.106 ms 00:17:51.267 [2024-11-20 09:30:16.634665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.267 [2024-11-20 09:30:16.634723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.267 [2024-11-20 09:30:16.634738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634802] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.634998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 
[2024-11-20 09:30:16.635022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:51.267 [2024-11-20 09:30:16.635230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:51.267 [2024-11-20 09:30:16.635354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.268 [2024-11-20 09:30:16.635615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.268 [2024-11-20 09:30:16.635626] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:17:51.268 [2024-11-20 09:30:16.635634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.268 [2024-11-20 09:30:16.635643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.268 [2024-11-20 09:30:16.635649] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.268 [2024-11-20 09:30:16.635658] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.268 [2024-11-20 09:30:16.635667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.268 [2024-11-20 09:30:16.635676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:51.268 [2024-11-20 09:30:16.635683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.268 [2024-11-20 09:30:16.635691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:51.268 [2024-11-20 09:30:16.635697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.268 [2024-11-20 09:30:16.635705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.268 [2024-11-20 09:30:16.635713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.268 [2024-11-20 09:30:16.635723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:17:51.268 [2024-11-20 09:30:16.635730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.648401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.268 [2024-11-20 09:30:16.648438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.268 [2024-11-20 09:30:16.648456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.637 ms 00:17:51.268 [2024-11-20 09:30:16.648464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.648850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.268 [2024-11-20 09:30:16.648870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:51.268 [2024-11-20 09:30:16.648882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:51.268 [2024-11-20 09:30:16.648889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.692720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.268 [2024-11-20 09:30:16.692776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.268 [2024-11-20 09:30:16.692789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.268 [2024-11-20 09:30:16.692796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.692910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.268 [2024-11-20 09:30:16.692920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.268 [2024-11-20 09:30:16.692930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.268 [2024-11-20 09:30:16.692937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.693004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.268 [2024-11-20 09:30:16.693013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.268 [2024-11-20 09:30:16.693027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.268 [2024-11-20 09:30:16.693034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.268 [2024-11-20 09:30:16.693062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.268 [2024-11-20 09:30:16.693069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.268 [2024-11-20 09:30:16.693079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.268 [2024-11-20 09:30:16.693086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.774696] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.774748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.527 [2024-11-20 09:30:16.774761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.774769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.527 [2024-11-20 09:30:16.838117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.527 [2024-11-20 09:30:16.838233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.527 [2024-11-20 09:30:16.838333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.527 [2024-11-20 09:30:16.838462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:51.527 [2024-11-20 09:30:16.838554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.527 [2024-11-20 09:30:16.838623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.527 [2024-11-20 09:30:16.838633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.527 [2024-11-20 09:30:16.838641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.527 [2024-11-20 09:30:16.838699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.528 [2024-11-20 09:30:16.838708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.528 [2024-11-20 09:30:16.838717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.528 [2024-11-20 09:30:16.838724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:51.528 [2024-11-20 09:30:16.838896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.515 ms, result 0 00:17:51.528 true 00:17:51.528 09:30:16 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73786 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73786 ']' 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73786 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73786 00:17:51.528 killing process with pid 73786 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73786' 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73786 00:17:51.528 09:30:16 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73786 00:17:58.090 09:30:22 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:58.656 65536+0 records in 00:17:58.656 65536+0 records out 00:17:58.656 268435456 bytes (268 MB, 256 MiB) copied, 1.07037 s, 251 MB/s 00:17:58.656 09:30:23 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.656 [2024-11-20 09:30:24.051242] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
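For the dd step above: bs=4K count=65536 writes 65536 x 4096 = 268435456 bytes, which dd prints both ways (268 MB decimal, 256 MiB binary), and the quoted rate is simply that byte count over the 1.07037 s elapsed. A quick shell check of the arithmetic, using plain POSIX tools rather than anything from the harness:

  echo $((65536 * 4096))                                # 268435456 bytes of random pattern
  printf 'scale=3; 268435456 / 1.07037 / 10^6\n' | bc   # 250.787 -> dd rounds this to 251 MB/s

Note that dd's MB/s is decimal (10^6 bytes), which is why 256 MiB moved in 1.07 s reads as 251 MB/s rather than ~239 MiB/s.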
00:17:58.656 [2024-11-20 09:30:24.051531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73968 ] 00:17:58.914 [2024-11-20 09:30:24.208622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.914 [2024-11-20 09:30:24.316791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.171 [2024-11-20 09:30:24.577290] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.171 [2024-11-20 09:30:24.577360] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.430 [2024-11-20 09:30:24.735182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.430 [2024-11-20 09:30:24.735236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:59.430 [2024-11-20 09:30:24.735250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.430 [2024-11-20 09:30:24.735259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.430 [2024-11-20 09:30:24.737895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.430 [2024-11-20 09:30:24.737930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.430 [2024-11-20 09:30:24.737940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:17:59.430 [2024-11-20 09:30:24.737947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.738015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:59.431 [2024-11-20 09:30:24.738996] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:59.431 [2024-11-20 09:30:24.739041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.739053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.431 [2024-11-20 09:30:24.739062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:17:59.431 [2024-11-20 09:30:24.739070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.740251] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:59.431 [2024-11-20 09:30:24.752420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.752455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:59.431 [2024-11-20 09:30:24.752467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:17:59.431 [2024-11-20 09:30:24.752475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.752561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.752573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:59.431 [2024-11-20 09:30:24.752582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:59.431 [2024-11-20 09:30:24.752590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.757507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:59.431 [2024-11-20 09:30:24.757653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.431 [2024-11-20 09:30:24.757668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.877 ms 00:17:59.431 [2024-11-20 09:30:24.757676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.757763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.757772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.431 [2024-11-20 09:30:24.757781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:59.431 [2024-11-20 09:30:24.757788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.757812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.757823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:59.431 [2024-11-20 09:30:24.757830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:59.431 [2024-11-20 09:30:24.757837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.757857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:59.431 [2024-11-20 09:30:24.761207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.761323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.431 [2024-11-20 09:30:24.761338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.355 ms 00:17:59.431 [2024-11-20 09:30:24.761347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.761384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.761392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:59.431 [2024-11-20 09:30:24.761400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:59.431 [2024-11-20 09:30:24.761407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.761424] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:59.431 [2024-11-20 09:30:24.761444] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:59.431 [2024-11-20 09:30:24.761479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:59.431 [2024-11-20 09:30:24.761494] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:59.431 [2024-11-20 09:30:24.761595] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:59.431 [2024-11-20 09:30:24.761606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:59.431 [2024-11-20 09:30:24.761617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:59.431 [2024-11-20 09:30:24.761626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:59.431 [2024-11-20 09:30:24.761645] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:59.431 [2024-11-20 09:30:24.761653] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:59.431 [2024-11-20 09:30:24.761660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:59.431 [2024-11-20 09:30:24.761667] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:59.431 [2024-11-20 09:30:24.761674] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:59.431 [2024-11-20 09:30:24.761682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.761689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:59.431 [2024-11-20 09:30:24.761696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:59.431 [2024-11-20 09:30:24.761703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.761802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.431 [2024-11-20 09:30:24.761812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:59.431 [2024-11-20 09:30:24.761821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:59.431 [2024-11-20 09:30:24.761828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.431 [2024-11-20 09:30:24.761930] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:59.431 [2024-11-20 09:30:24.761940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:59.431 [2024-11-20 09:30:24.761948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.431 [2024-11-20 09:30:24.761955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.431 [2024-11-20 09:30:24.761963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:59.431 [2024-11-20 09:30:24.761970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:59.431 [2024-11-20 09:30:24.761977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:59.431 [2024-11-20 09:30:24.761984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:59.431 [2024-11-20 09:30:24.761992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:59.431 [2024-11-20 09:30:24.761998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.431 [2024-11-20 09:30:24.762005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:59.431 [2024-11-20 09:30:24.762011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:59.431 [2024-11-20 09:30:24.762017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.431 [2024-11-20 09:30:24.762030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:59.431 [2024-11-20 09:30:24.762036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:59.431 [2024-11-20 09:30:24.762043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:59.431 [2024-11-20 09:30:24.762060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:59.431 [2024-11-20 09:30:24.762066] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:59.431 [2024-11-20 09:30:24.762079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.431 [2024-11-20 09:30:24.762092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:59.431 [2024-11-20 09:30:24.762098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.431 [2024-11-20 09:30:24.762111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:59.431 [2024-11-20 09:30:24.762117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.431 [2024-11-20 09:30:24.762129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:59.431 [2024-11-20 09:30:24.762136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:59.431 [2024-11-20 09:30:24.762142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.431 [2024-11-20 09:30:24.762149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:59.432 [2024-11-20 09:30:24.762155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:59.432 [2024-11-20 09:30:24.762161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.432 [2024-11-20 09:30:24.762168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:59.432 [2024-11-20 09:30:24.762174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:59.432 [2024-11-20 09:30:24.762180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.432 [2024-11-20 09:30:24.762187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:59.432 [2024-11-20 09:30:24.762193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:59.432 [2024-11-20 09:30:24.762199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.432 [2024-11-20 09:30:24.762206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:59.432 [2024-11-20 09:30:24.762212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:59.432 [2024-11-20 09:30:24.762218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.432 [2024-11-20 09:30:24.762225] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:59.432 [2024-11-20 09:30:24.762232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:59.432 [2024-11-20 09:30:24.762239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.432 [2024-11-20 09:30:24.762248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.432 [2024-11-20 09:30:24.762255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:59.432 [2024-11-20 09:30:24.762264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:59.432 [2024-11-20 09:30:24.762270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:59.432 
[2024-11-20 09:30:24.762277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:59.432 [2024-11-20 09:30:24.762283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:59.432 [2024-11-20 09:30:24.762290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:59.432 [2024-11-20 09:30:24.762308] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:59.432 [2024-11-20 09:30:24.762318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:59.432 [2024-11-20 09:30:24.762333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:59.432 [2024-11-20 09:30:24.762340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:59.432 [2024-11-20 09:30:24.762347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:59.432 [2024-11-20 09:30:24.762354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:59.432 [2024-11-20 09:30:24.762360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:59.432 [2024-11-20 09:30:24.762368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:59.432 [2024-11-20 09:30:24.762374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:59.432 [2024-11-20 09:30:24.762381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:59.432 [2024-11-20 09:30:24.762388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:59.432 [2024-11-20 09:30:24.762423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:59.432 [2024-11-20 09:30:24.762431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:59.432 [2024-11-20 09:30:24.762446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:59.432 [2024-11-20 09:30:24.762453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:59.432 [2024-11-20 09:30:24.762460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:59.432 [2024-11-20 09:30:24.762467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.762474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:59.432 [2024-11-20 09:30:24.762484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:17:59.432 [2024-11-20 09:30:24.762499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.788827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.788981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.432 [2024-11-20 09:30:24.789045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.265 ms 00:17:59.432 [2024-11-20 09:30:24.789068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.789220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.789249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.432 [2024-11-20 09:30:24.789329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:59.432 [2024-11-20 09:30:24.789353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.833486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.833705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.432 [2024-11-20 09:30:24.833775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.096 ms 00:17:59.432 [2024-11-20 09:30:24.833804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.833940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.833968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.432 [2024-11-20 09:30:24.833989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.432 [2024-11-20 09:30:24.834007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.834465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.834518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.432 [2024-11-20 09:30:24.834540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:59.432 [2024-11-20 09:30:24.834565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.834708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.834731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.432 [2024-11-20 09:30:24.834751] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:59.432 [2024-11-20 09:30:24.834769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.848400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.848523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.432 [2024-11-20 09:30:24.848572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.557 ms 00:17:59.432 [2024-11-20 09:30:24.848593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.432 [2024-11-20 09:30:24.861446] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:59.432 [2024-11-20 09:30:24.861579] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:59.432 [2024-11-20 09:30:24.861637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.432 [2024-11-20 09:30:24.861657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:59.432 [2024-11-20 09:30:24.861677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:17:59.432 [2024-11-20 09:30:24.861695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.895459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:24.895657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:59.690 [2024-11-20 09:30:24.895729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.673 ms 00:17:59.690 [2024-11-20 09:30:24.895752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.908205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:24.908379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:59.690 [2024-11-20 09:30:24.908482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.337 ms 00:17:59.690 [2024-11-20 09:30:24.908510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.920582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:24.920717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:59.690 [2024-11-20 09:30:24.920767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.969 ms 00:17:59.690 [2024-11-20 09:30:24.920788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.921696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:24.921763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.690 [2024-11-20 09:30:24.921839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:59.690 [2024-11-20 09:30:24.921863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.978887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:24.979096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:59.690 [2024-11-20 09:30:24.979152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.982 ms 00:17:59.690 [2024-11-20 09:30:24.979175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:24.990051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:59.690 [2024-11-20 09:30:25.004781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:25.004933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.690 [2024-11-20 09:30:25.004984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.495 ms 00:17:59.690 [2024-11-20 09:30:25.005006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.690 [2024-11-20 09:30:25.005112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.690 [2024-11-20 09:30:25.005141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:59.690 [2024-11-20 09:30:25.005161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:59.691 [2024-11-20 09:30:25.005180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.691 [2024-11-20 09:30:25.005240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.691 [2024-11-20 09:30:25.005344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.691 [2024-11-20 09:30:25.005380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:59.691 [2024-11-20 09:30:25.005408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.691 [2024-11-20 09:30:25.005471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.691 [2024-11-20 09:30:25.005507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.691 [2024-11-20 09:30:25.005542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:59.691 [2024-11-20 09:30:25.005572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.691 [2024-11-20 09:30:25.005657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:59.691 [2024-11-20 09:30:25.005685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.691 [2024-11-20 09:30:25.005730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:59.691 [2024-11-20 09:30:25.005753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:59.691 [2024-11-20 09:30:25.005789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.691 [2024-11-20 09:30:25.030983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.691 [2024-11-20 09:30:25.031102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.691 [2024-11-20 09:30:25.031151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.155 ms 00:17:59.691 [2024-11-20 09:30:25.031173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.691 [2024-11-20 09:30:25.031362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.691 [2024-11-20 09:30:25.031401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.691 [2024-11-20 09:30:25.031422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:59.691 [2024-11-20 09:30:25.031440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:59.691 [2024-11-20 09:30:25.032286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:59.691 [2024-11-20 09:30:25.035588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.797 ms, result 0 00:17:59.691 [2024-11-20 09:30:25.036895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.691 [2024-11-20 09:30:25.051370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.624  [2024-11-20T09:30:27.452Z] Copying: 16/256 [MB] (16 MBps) [2024-11-20T09:30:28.387Z] Copying: 44/256 [MB] (27 MBps) [2024-11-20T09:30:29.318Z] Copying: 61/256 [MB] (17 MBps) [2024-11-20T09:30:30.250Z] Copying: 80/256 [MB] (18 MBps) [2024-11-20T09:30:31.181Z] Copying: 101/256 [MB] (21 MBps) [2024-11-20T09:30:32.115Z] Copying: 143/256 [MB] (42 MBps) [2024-11-20T09:30:33.488Z] Copying: 186/256 [MB] (42 MBps) [2024-11-20T09:30:34.055Z] Copying: 225/256 [MB] (39 MBps) [2024-11-20T09:30:34.055Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 09:30:33.766200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.599 [2024-11-20 09:30:33.775452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.775586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:08.599 [2024-11-20 09:30:33.775686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.599 [2024-11-20 09:30:33.775710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.775746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:08.599 [2024-11-20 09:30:33.778377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.778486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:08.599 [2024-11-20 09:30:33.778659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:18:08.599 [2024-11-20 09:30:33.778681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.780407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.780515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:08.599 [2024-11-20 09:30:33.780574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:18:08.599 [2024-11-20 09:30:33.780596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.786931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.787041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:08.599 [2024-11-20 09:30:33.787104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.305 ms 00:18:08.599 [2024-11-20 09:30:33.787126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.794312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.794417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:08.599 [2024-11-20 09:30:33.794465] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.100 ms 00:18:08.599 [2024-11-20 09:30:33.794487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.818091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.818211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.599 [2024-11-20 09:30:33.818261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.531 ms 00:18:08.599 [2024-11-20 09:30:33.818282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.832484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.832622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.599 [2024-11-20 09:30:33.832648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.137 ms 00:18:08.599 [2024-11-20 09:30:33.832659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.832793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.832803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.599 [2024-11-20 09:30:33.832812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:08.599 [2024-11-20 09:30:33.832819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.857308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.857346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:08.599 [2024-11-20 09:30:33.857359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.472 ms 00:18:08.599 [2024-11-20 09:30:33.857368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.881094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.881131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:08.599 [2024-11-20 09:30:33.881144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.687 ms 00:18:08.599 [2024-11-20 09:30:33.881151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.904123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.904245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.599 [2024-11-20 09:30:33.904294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.935 ms 00:18:08.599 [2024-11-20 09:30:33.904314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.927145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.599 [2024-11-20 09:30:33.927272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:08.599 [2024-11-20 09:30:33.927288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.769 ms 00:18:08.599 [2024-11-20 09:30:33.927295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.599 [2024-11-20 09:30:33.927347] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.599 [2024-11-20 09:30:33.927366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.599 [2024-11-20 09:30:33.927690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 
09:30:33.927742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:08.600 [2024-11-20 09:30:33.927925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.927997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:08.600 [2024-11-20 09:30:33.928122] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.600 [2024-11-20 09:30:33.928130] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:08.600 [2024-11-20 09:30:33.928138] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.600 [2024-11-20 09:30:33.928145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.600 [2024-11-20 09:30:33.928152] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.600 [2024-11-20 09:30:33.928160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.600 [2024-11-20 09:30:33.928167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.600 [2024-11-20 09:30:33.928175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.600 [2024-11-20 09:30:33.928182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.600 [2024-11-20 09:30:33.928188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.600 [2024-11-20 09:30:33.928195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.600 [2024-11-20 09:30:33.928202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.600 [2024-11-20 09:30:33.928209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.600 [2024-11-20 09:30:33.928219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:18:08.600 [2024-11-20 09:30:33.928226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.600 [2024-11-20 09:30:33.940985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.600 [2024-11-20 09:30:33.941021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.600 [2024-11-20 09:30:33.941032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.727 ms 00:18:08.600 [2024-11-20 09:30:33.941040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.600 [2024-11-20 09:30:33.941425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.600 [2024-11-20 09:30:33.941473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.600 [2024-11-20 09:30:33.941485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:18:08.600 [2024-11-20 09:30:33.941493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.600 [2024-11-20 09:30:33.976485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.600 [2024-11-20 09:30:33.976637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.600 [2024-11-20 09:30:33.976653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.600 [2024-11-20 09:30:33.976660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.600 [2024-11-20 09:30:33.976738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.600 [2024-11-20 09:30:33.976751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.600 [2024-11-20 09:30:33.976758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.600 [2024-11-20 09:30:33.976765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:08.600 [2024-11-20 09:30:33.976812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.600 [2024-11-20 09:30:33.976821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.600 [2024-11-20 09:30:33.976829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.600 [2024-11-20 09:30:33.976836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.600 [2024-11-20 09:30:33.976852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.600 [2024-11-20 09:30:33.976860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.600 [2024-11-20 09:30:33.976870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.600 [2024-11-20 09:30:33.976877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.053214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.053261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.859 [2024-11-20 09:30:34.053272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.053280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.859 [2024-11-20 09:30:34.115552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.859 [2024-11-20 09:30:34.115635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.859 [2024-11-20 09:30:34.115685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.859 [2024-11-20 09:30:34.115807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.859 [2024-11-20 09:30:34.115861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 
09:30:34.115868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.859 [2024-11-20 09:30:34.115924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.115973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.859 [2024-11-20 09:30:34.115982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.859 [2024-11-20 09:30:34.115989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.859 [2024-11-20 09:30:34.115999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.859 [2024-11-20 09:30:34.116129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.672 ms, result 0 00:18:09.891 00:18:09.891 00:18:09.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.891 09:30:34 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74087 00:18:09.891 09:30:34 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74087 00:18:09.891 09:30:34 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74087 ']' 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.891 09:30:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:09.891 [2024-11-20 09:30:35.062467] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:18:09.891 [2024-11-20 09:30:35.062596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74087 ] 00:18:09.891 [2024-11-20 09:30:35.221799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.891 [2024-11-20 09:30:35.320194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.518 09:30:35 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.518 09:30:35 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:10.518 09:30:35 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:10.776 [2024-11-20 09:30:36.106550] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.776 [2024-11-20 09:30:36.106615] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.035 [2024-11-20 09:30:36.277089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.277140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:11.036 [2024-11-20 09:30:36.277155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.036 [2024-11-20 09:30:36.277164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.279800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.279834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.036 [2024-11-20 09:30:36.279845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:18:11.036 [2024-11-20 09:30:36.279853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.279927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:11.036 [2024-11-20 09:30:36.280637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:11.036 [2024-11-20 09:30:36.280663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.280671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.036 [2024-11-20 09:30:36.280681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:11.036 [2024-11-20 09:30:36.280689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.281760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:11.036 [2024-11-20 09:30:36.294124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.294162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:11.036 [2024-11-20 09:30:36.294174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.369 ms 00:18:11.036 [2024-11-20 09:30:36.294184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.294269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.294281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:11.036 [2024-11-20 09:30:36.294290] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:11.036 [2024-11-20 09:30:36.294315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.299126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.299161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.036 [2024-11-20 09:30:36.299171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:18:11.036 [2024-11-20 09:30:36.299180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.299279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.299291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.036 [2024-11-20 09:30:36.299312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:11.036 [2024-11-20 09:30:36.299321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.299353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.299363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:11.036 [2024-11-20 09:30:36.299371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:11.036 [2024-11-20 09:30:36.299379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.299403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:11.036 [2024-11-20 09:30:36.302650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.302676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.036 [2024-11-20 09:30:36.302687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:18:11.036 [2024-11-20 09:30:36.302695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.302730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.302738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:11.036 [2024-11-20 09:30:36.302748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:11.036 [2024-11-20 09:30:36.302757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.302778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:11.036 [2024-11-20 09:30:36.302795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:11.036 [2024-11-20 09:30:36.302833] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:11.036 [2024-11-20 09:30:36.302848] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:11.036 [2024-11-20 09:30:36.302951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:11.036 [2024-11-20 09:30:36.302961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:11.036 [2024-11-20 09:30:36.302975] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:11.036 [2024-11-20 09:30:36.302987] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:11.036 [2024-11-20 09:30:36.302997] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303005] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:11.036 [2024-11-20 09:30:36.303014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:11.036 [2024-11-20 09:30:36.303021] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:11.036 [2024-11-20 09:30:36.303031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:11.036 [2024-11-20 09:30:36.303038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.303047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:11.036 [2024-11-20 09:30:36.303054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:18:11.036 [2024-11-20 09:30:36.303063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.303151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.036 [2024-11-20 09:30:36.303160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:11.036 [2024-11-20 09:30:36.303167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:11.036 [2024-11-20 09:30:36.303175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.036 [2024-11-20 09:30:36.303285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:11.036 [2024-11-20 09:30:36.303322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:11.036 [2024-11-20 09:30:36.303332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:11.036 [2024-11-20 09:30:36.303356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:11.036 [2024-11-20 09:30:36.303382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.036 [2024-11-20 09:30:36.303397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:11.036 [2024-11-20 09:30:36.303405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:11.036 [2024-11-20 09:30:36.303411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.036 [2024-11-20 09:30:36.303419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:11.036 [2024-11-20 09:30:36.303426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:11.036 [2024-11-20 09:30:36.303434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.036 
[2024-11-20 09:30:36.303440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:11.036 [2024-11-20 09:30:36.303448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:11.036 [2024-11-20 09:30:36.303474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:11.036 [2024-11-20 09:30:36.303498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:11.036 [2024-11-20 09:30:36.303519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:11.036 [2024-11-20 09:30:36.303543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.036 [2024-11-20 09:30:36.303557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:11.036 [2024-11-20 09:30:36.303563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:11.036 [2024-11-20 09:30:36.303572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.036 [2024-11-20 09:30:36.303579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:11.036 [2024-11-20 09:30:36.303587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:11.036 [2024-11-20 09:30:36.303593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.036 [2024-11-20 09:30:36.303601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:11.036 [2024-11-20 09:30:36.303607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:11.036 [2024-11-20 09:30:36.303617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.037 [2024-11-20 09:30:36.303623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:11.037 [2024-11-20 09:30:36.303631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:11.037 [2024-11-20 09:30:36.303638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.037 [2024-11-20 09:30:36.303645] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:11.037 [2024-11-20 09:30:36.303653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:11.037 [2024-11-20 09:30:36.303663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.037 [2024-11-20 09:30:36.303670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.037 [2024-11-20 09:30:36.303679] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:11.037 [2024-11-20 09:30:36.303685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:11.037 [2024-11-20 09:30:36.303693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:11.037 [2024-11-20 09:30:36.303700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:11.037 [2024-11-20 09:30:36.303708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:11.037 [2024-11-20 09:30:36.303714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:11.037 [2024-11-20 09:30:36.303723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:11.037 [2024-11-20 09:30:36.303732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:11.037 [2024-11-20 09:30:36.303750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:11.037 [2024-11-20 09:30:36.303760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:11.037 [2024-11-20 09:30:36.303767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:11.037 [2024-11-20 09:30:36.303777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:11.037 [2024-11-20 09:30:36.303784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:11.037 [2024-11-20 09:30:36.303793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:11.037 [2024-11-20 09:30:36.303800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:11.037 [2024-11-20 09:30:36.303808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:11.037 [2024-11-20 09:30:36.303815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:11.037 [2024-11-20 09:30:36.303854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:11.037 [2024-11-20 
09:30:36.303863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:11.037 [2024-11-20 09:30:36.303881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:11.037 [2024-11-20 09:30:36.303890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:11.037 [2024-11-20 09:30:36.303897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:11.037 [2024-11-20 09:30:36.303906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.303913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:11.037 [2024-11-20 09:30:36.303922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:18:11.037 [2024-11-20 09:30:36.303929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.329327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.329360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.037 [2024-11-20 09:30:36.329372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.339 ms 00:18:11.037 [2024-11-20 09:30:36.329381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.329505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.329515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.037 [2024-11-20 09:30:36.329525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:11.037 [2024-11-20 09:30:36.329532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.359714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.359752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.037 [2024-11-20 09:30:36.359768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.158 ms 00:18:11.037 [2024-11-20 09:30:36.359777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.359850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.359860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.037 [2024-11-20 09:30:36.359870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:11.037 [2024-11-20 09:30:36.359877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.360199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.360221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.037 [2024-11-20 09:30:36.360232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:11.037 [2024-11-20 09:30:36.360241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.360381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.360390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.037 [2024-11-20 09:30:36.360399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:11.037 [2024-11-20 09:30:36.360406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.374539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.374568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.037 [2024-11-20 09:30:36.374580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.108 ms 00:18:11.037 [2024-11-20 09:30:36.374588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.386706] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:11.037 [2024-11-20 09:30:36.386740] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:11.037 [2024-11-20 09:30:36.386756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.386764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:11.037 [2024-11-20 09:30:36.386776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.044 ms 00:18:11.037 [2024-11-20 09:30:36.386783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.410871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.410912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.037 [2024-11-20 09:30:36.410927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.007 ms 00:18:11.037 [2024-11-20 09:30:36.410937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.422768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.422804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.037 [2024-11-20 09:30:36.422819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.747 ms 00:18:11.037 [2024-11-20 09:30:36.422828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.434182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.434212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:11.037 [2024-11-20 09:30:36.434225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.277 ms 00:18:11.037 [2024-11-20 09:30:36.434232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.037 [2024-11-20 09:30:36.434886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.037 [2024-11-20 09:30:36.434912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.037 [2024-11-20 09:30:36.434923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:18:11.037 [2024-11-20 09:30:36.434930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 
09:30:36.502073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.502133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.296 [2024-11-20 09:30:36.502150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.115 ms 00:18:11.296 [2024-11-20 09:30:36.502159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.512659] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:11.296 [2024-11-20 09:30:36.526715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.526902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.296 [2024-11-20 09:30:36.526923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:18:11.296 [2024-11-20 09:30:36.526933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.527025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.527037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.296 [2024-11-20 09:30:36.527046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:11.296 [2024-11-20 09:30:36.527054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.527099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.527110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.296 [2024-11-20 09:30:36.527118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:11.296 [2024-11-20 09:30:36.527128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.527152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.527162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.296 [2024-11-20 09:30:36.527170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.296 [2024-11-20 09:30:36.527181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.527211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.296 [2024-11-20 09:30:36.527223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.527230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.296 [2024-11-20 09:30:36.527242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:11.296 [2024-11-20 09:30:36.527248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.550339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.550376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.296 [2024-11-20 09:30:36.550390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.062 ms 00:18:11.296 [2024-11-20 09:30:36.550398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.550509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.550521] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.296 [2024-11-20 09:30:36.550531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:11.296 [2024-11-20 09:30:36.550541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.551814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.296 [2024-11-20 09:30:36.555055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.446 ms, result 0 00:18:11.296 [2024-11-20 09:30:36.556114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.296 Some configs were skipped because the RPC state that can call them passed over. 00:18:11.296 09:30:36 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:11.296 [2024-11-20 09:30:36.730649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.296 [2024-11-20 09:30:36.730707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:11.296 [2024-11-20 09:30:36.730720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:18:11.296 [2024-11-20 09:30:36.730730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.296 [2024-11-20 09:30:36.730764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.603 ms, result 0 00:18:11.296 true 00:18:11.296 09:30:36 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:11.554 [2024-11-20 09:30:36.934528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.554 [2024-11-20 09:30:36.934579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:11.554 [2024-11-20 09:30:36.934592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:18:11.554 [2024-11-20 09:30:36.934600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.554 [2024-11-20 09:30:36.934635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.218 ms, result 0 00:18:11.554 true 00:18:11.554 09:30:36 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74087 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74087 ']' 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74087 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74087 00:18:11.554 killing process with pid 74087 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74087' 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74087 00:18:11.554 09:30:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74087 00:18:12.488 [2024-11-20 09:30:37.666649] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.666699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:12.488 [2024-11-20 09:30:37.666713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.488 [2024-11-20 09:30:37.666722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.666744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:12.488 [2024-11-20 09:30:37.669324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.669358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:12.488 [2024-11-20 09:30:37.669373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:18:12.488 [2024-11-20 09:30:37.669382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.669677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.669736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:12.488 [2024-11-20 09:30:37.669751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:18:12.488 [2024-11-20 09:30:37.669758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.673775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.673804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:12.488 [2024-11-20 09:30:37.673817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:18:12.488 [2024-11-20 09:30:37.673824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.680765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.680898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:12.488 [2024-11-20 09:30:37.680917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:18:12.488 [2024-11-20 09:30:37.680925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.690339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.690371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:12.488 [2024-11-20 09:30:37.690385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.358 ms 00:18:12.488 [2024-11-20 09:30:37.690399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.697985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.698043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:12.488 [2024-11-20 09:30:37.698066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.546 ms 00:18:12.488 [2024-11-20 09:30:37.698079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.698236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.698248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:12.488 [2024-11-20 09:30:37.698258] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:12.488 [2024-11-20 09:30:37.698270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.707892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.708028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:12.488 [2024-11-20 09:30:37.708046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.599 ms 00:18:12.488 [2024-11-20 09:30:37.708055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.717493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.717524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:12.488 [2024-11-20 09:30:37.717538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.401 ms 00:18:12.488 [2024-11-20 09:30:37.717546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.726463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.726664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:12.488 [2024-11-20 09:30:37.726683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.874 ms 00:18:12.488 [2024-11-20 09:30:37.726690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.735666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.488 [2024-11-20 09:30:37.735696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:12.488 [2024-11-20 09:30:37.735707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.912 ms 00:18:12.488 [2024-11-20 09:30:37.735715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.488 [2024-11-20 09:30:37.735765] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:12.488 [2024-11-20 09:30:37.735780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735868] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:12.488 [2024-11-20 09:30:37.735978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.735985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.735994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 
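(Annotation, not part of the captured output: each "Bands validity" record in this dump reports, per band, valid blocks / total blocks (0 / 261120 here), the band's cumulative write count, and its state; after the unmaps and a clean shutdown every band is free with no valid data. One way to summarize a saved copy of such a dump, assuming the console output was captured to a file named build.log:

  $ grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' build.log \
      | awk '{print $NF}' | sort | uniq -c
  # e.g. "100 free" when all dumped bands are free

This is an editorial sketch, not a script used by the test.)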
[2024-11-20 09:30:37.736076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:12.489 [2024-11-20 09:30:37.736279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:12.489 [2024-11-20 09:30:37.736637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:12.489 [2024-11-20 09:30:37.736650] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:12.489 [2024-11-20 09:30:37.736665] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:12.489 [2024-11-20 09:30:37.736676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:12.489 [2024-11-20 09:30:37.736683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:12.489 [2024-11-20 09:30:37.736692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:12.489 [2024-11-20 09:30:37.736698] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:12.489 [2024-11-20 09:30:37.736707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:12.489 [2024-11-20 09:30:37.736714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:12.489 [2024-11-20 09:30:37.736722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:12.489 [2024-11-20 09:30:37.736729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:12.489 [2024-11-20 09:30:37.736737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
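(Annotation, not part of the captured output: in the statistics dump just above, WAF is the write amplification factor, i.e. total media writes divided by user writes; with user writes: 0 the division is undefined and the log prints inf, so the 960 total writes are internal metadata traffic from startup and shutdown rather than user I/O. The same computation with a zero guard, as a sketch:

  $ total=960; user=0
  $ if [ "$user" -gt 0 ]; then echo "scale=2; $total / $user" | bc; else echo inf; fi
  inf

)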
00:18:12.490 [2024-11-20 09:30:37.736744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:12.490 [2024-11-20 09:30:37.736753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:18:12.490 [2024-11-20 09:30:37.736760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.749336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.490 [2024-11-20 09:30:37.749461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:12.490 [2024-11-20 09:30:37.749482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.551 ms 00:18:12.490 [2024-11-20 09:30:37.749490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.749858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.490 [2024-11-20 09:30:37.749875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:12.490 [2024-11-20 09:30:37.749885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:12.490 [2024-11-20 09:30:37.749894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.786415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.786459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.490 [2024-11-20 09:30:37.786471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.786477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.786592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.786601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.490 [2024-11-20 09:30:37.786609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.786618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.786658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.786666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.490 [2024-11-20 09:30:37.786675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.786681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.786697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.786703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.490 [2024-11-20 09:30:37.786711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.786716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.847014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.847056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.490 [2024-11-20 09:30:37.847068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.847074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 
09:30:37.895222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.895265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.490 [2024-11-20 09:30:37.895276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.895284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.490 [2024-11-20 09:30:37.896279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.490 [2024-11-20 09:30:37.896334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.490 [2024-11-20 09:30:37.896437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.490 [2024-11-20 09:30:37.896482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.490 [2024-11-20 09:30:37.896533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.490 [2024-11-20 09:30:37.896579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.490 [2024-11-20 09:30:37.896587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.490 [2024-11-20 09:30:37.896592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.490 [2024-11-20 09:30:37.896696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 230.035 ms, result 0 00:18:13.054 09:30:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:13.054 09:30:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:13.054 [2024-11-20 09:30:38.477704] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:13.054 [2024-11-20 09:30:38.477821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74134 ] 00:18:13.311 [2024-11-20 09:30:38.633103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.311 [2024-11-20 09:30:38.714247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.567 [2024-11-20 09:30:38.926202] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:13.567 [2024-11-20 09:30:38.926260] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:13.826 [2024-11-20 09:30:39.078511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.078566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.826 [2024-11-20 09:30:39.078580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:13.826 [2024-11-20 09:30:39.078588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.081251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.081437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.826 [2024-11-20 09:30:39.081454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.645 ms 00:18:13.826 [2024-11-20 09:30:39.081461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.081548] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.826 [2024-11-20 09:30:39.082270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.826 [2024-11-20 09:30:39.082292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.082312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.826 [2024-11-20 09:30:39.082322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:18:13.826 [2024-11-20 09:30:39.082329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.083444] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:13.826 [2024-11-20 09:30:39.095763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.095800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:13.826 [2024-11-20 09:30:39.095813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.320 ms 00:18:13.826 [2024-11-20 09:30:39.095821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.095902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.095912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:13.826 [2024-11-20 09:30:39.095921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:18:13.826 [2024-11-20 09:30:39.095928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.100656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.100787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.826 [2024-11-20 09:30:39.100802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.687 ms 00:18:13.826 [2024-11-20 09:30:39.100810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.100903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.100913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.826 [2024-11-20 09:30:39.100922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:13.826 [2024-11-20 09:30:39.100929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.100953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.100964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.826 [2024-11-20 09:30:39.100971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.826 [2024-11-20 09:30:39.100978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.100997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:13.826 [2024-11-20 09:30:39.104228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.104380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.826 [2024-11-20 09:30:39.104397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:18:13.826 [2024-11-20 09:30:39.104405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.104440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.104449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.826 [2024-11-20 09:30:39.104457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:13.826 [2024-11-20 09:30:39.104464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.104481] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:13.826 [2024-11-20 09:30:39.104503] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:13.826 [2024-11-20 09:30:39.104536] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:13.826 [2024-11-20 09:30:39.104551] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:13.826 [2024-11-20 09:30:39.104651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:13.826 [2024-11-20 09:30:39.104662] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.826 [2024-11-20 09:30:39.104672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:13.826 [2024-11-20 09:30:39.104681] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.826 [2024-11-20 09:30:39.104692] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.826 [2024-11-20 09:30:39.104700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:13.826 [2024-11-20 09:30:39.104707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.826 [2024-11-20 09:30:39.104714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:13.826 [2024-11-20 09:30:39.104721] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:13.826 [2024-11-20 09:30:39.104729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.104736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.826 [2024-11-20 09:30:39.104743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:13.826 [2024-11-20 09:30:39.104751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.104837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.826 [2024-11-20 09:30:39.104845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.826 [2024-11-20 09:30:39.104855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:13.826 [2024-11-20 09:30:39.104863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.826 [2024-11-20 09:30:39.104975] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.826 [2024-11-20 09:30:39.104985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.826 [2024-11-20 09:30:39.104994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.826 [2024-11-20 09:30:39.105002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.826 [2024-11-20 09:30:39.105011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.826 [2024-11-20 09:30:39.105017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.826 [2024-11-20 09:30:39.105024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.827 [2024-11-20 09:30:39.105039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.827 [2024-11-20 09:30:39.105052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.827 [2024-11-20 09:30:39.105058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:13.827 [2024-11-20 09:30:39.105065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.827 [2024-11-20 09:30:39.105078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.827 [2024-11-20 09:30:39.105085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:13.827 [2024-11-20 09:30:39.105091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105098] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.827 [2024-11-20 09:30:39.105105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.827 [2024-11-20 09:30:39.105128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.827 [2024-11-20 09:30:39.105147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.827 [2024-11-20 09:30:39.105167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.827 [2024-11-20 09:30:39.105187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.827 [2024-11-20 09:30:39.105207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.827 [2024-11-20 09:30:39.105220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.827 [2024-11-20 09:30:39.105226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:13.827 [2024-11-20 09:30:39.105233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.827 [2024-11-20 09:30:39.105240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:13.827 [2024-11-20 09:30:39.105246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:13.827 [2024-11-20 09:30:39.105253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:13.827 [2024-11-20 09:30:39.105266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:13.827 [2024-11-20 09:30:39.105273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105280] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.827 [2024-11-20 09:30:39.105287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.827 [2024-11-20 09:30:39.105295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.827 [2024-11-20 09:30:39.105324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:13.827 
[2024-11-20 09:30:39.105331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.827 [2024-11-20 09:30:39.105337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.827 [2024-11-20 09:30:39.105345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.827 [2024-11-20 09:30:39.105352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.827 [2024-11-20 09:30:39.105359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.827 [2024-11-20 09:30:39.105367] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.827 [2024-11-20 09:30:39.105376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.827 [2024-11-20 09:30:39.105384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:13.827 [2024-11-20 09:30:39.105392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:13.827 [2024-11-20 09:30:39.105400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:13.827 [2024-11-20 09:30:39.105407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:13.827 [2024-11-20 09:30:39.105414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:13.827 [2024-11-20 09:30:39.105421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:13.827 [2024-11-20 09:30:39.105428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:13.827 [2024-11-20 09:30:39.105435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:13.827 [2024-11-20 09:30:39.105443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:13.827 [2024-11-20 09:30:39.105450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:13.827 [2024-11-20 09:30:39.105457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:13.827 [2024-11-20 09:30:39.105464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:13.828 [2024-11-20 09:30:39.105471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:13.828 [2024-11-20 09:30:39.105480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:13.828 [2024-11-20 09:30:39.105487] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.828 [2024-11-20 09:30:39.105495] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.828 [2024-11-20 09:30:39.105504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.828 [2024-11-20 09:30:39.105511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.828 [2024-11-20 09:30:39.105518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.828 [2024-11-20 09:30:39.105525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.828 [2024-11-20 09:30:39.105532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.105540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.828 [2024-11-20 09:30:39.105550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:18:13.828 [2024-11-20 09:30:39.105556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.131757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.131915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.828 [2024-11-20 09:30:39.131973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.151 ms 00:18:13.828 [2024-11-20 09:30:39.131997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.132162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.132195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.828 [2024-11-20 09:30:39.132262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:13.828 [2024-11-20 09:30:39.132285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.179234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.179423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.828 [2024-11-20 09:30:39.179486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.883 ms 00:18:13.828 [2024-11-20 09:30:39.179515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.179633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.179661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.828 [2024-11-20 09:30:39.179682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:13.828 [2024-11-20 09:30:39.179701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.180034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.180071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.828 [2024-11-20 09:30:39.180236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:13.828 [2024-11-20 09:30:39.180273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 
09:30:39.180427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.180454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.828 [2024-11-20 09:30:39.180506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:13.828 [2024-11-20 09:30:39.180528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.193750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.193870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.828 [2024-11-20 09:30:39.193917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:18:13.828 [2024-11-20 09:30:39.193939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.206096] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:13.828 [2024-11-20 09:30:39.206222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:13.828 [2024-11-20 09:30:39.206279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.206309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:13.828 [2024-11-20 09:30:39.206331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.221 ms 00:18:13.828 [2024-11-20 09:30:39.206350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.230661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.230792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:13.828 [2024-11-20 09:30:39.230846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.234 ms 00:18:13.828 [2024-11-20 09:30:39.230868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.242692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.242808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:13.828 [2024-11-20 09:30:39.242856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.698 ms 00:18:13.828 [2024-11-20 09:30:39.242877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.253846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.253953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:13.828 [2024-11-20 09:30:39.253999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.898 ms 00:18:13.828 [2024-11-20 09:30:39.254019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.828 [2024-11-20 09:30:39.254677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.828 [2024-11-20 09:30:39.254765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.828 [2024-11-20 09:30:39.254812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:18:13.828 [2024-11-20 09:30:39.254834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.157 [2024-11-20 09:30:39.309171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:14.157 [2024-11-20 09:30:39.309383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:14.157 [2024-11-20 09:30:39.309444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.299 ms 00:18:14.157 [2024-11-20 09:30:39.309467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.319865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:14.158 [2024-11-20 09:30:39.333913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.334047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:14.158 [2024-11-20 09:30:39.334167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.330 ms 00:18:14.158 [2024-11-20 09:30:39.334197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.334326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.334354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:14.158 [2024-11-20 09:30:39.334375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:14.158 [2024-11-20 09:30:39.334394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.334453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.334652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:14.158 [2024-11-20 09:30:39.334823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:14.158 [2024-11-20 09:30:39.334842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.334881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.334905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:14.158 [2024-11-20 09:30:39.334924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:14.158 [2024-11-20 09:30:39.334942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.334984] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:14.158 [2024-11-20 09:30:39.335059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.335148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:14.158 [2024-11-20 09:30:39.335168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:14.158 [2024-11-20 09:30:39.335186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.358675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.358797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:14.158 [2024-11-20 09:30:39.358851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.457 ms 00:18:14.158 [2024-11-20 09:30:39.358875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.359070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.158 [2024-11-20 09:30:39.359216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:14.158 [2024-11-20 09:30:39.359242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:14.158 [2024-11-20 09:30:39.359260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.158 [2024-11-20 09:30:39.360067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.158 [2024-11-20 09:30:39.363179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.298 ms, result 0 00:18:14.158 [2024-11-20 09:30:39.363816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.158 [2024-11-20 09:30:39.376698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:15.104  [2024-11-20T09:30:41.493Z] Copying: 41/256 [MB] (41 MBps) [2024-11-20T09:30:42.426Z] Copying: 85/256 [MB] (44 MBps) [2024-11-20T09:30:43.799Z] Copying: 126/256 [MB] (41 MBps) [2024-11-20T09:30:44.731Z] Copying: 170/256 [MB] (43 MBps) [2024-11-20T09:30:45.296Z] Copying: 214/256 [MB] (43 MBps) [2024-11-20T09:30:45.296Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-20 09:30:45.279852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.841 [2024-11-20 09:30:45.287137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.841 [2024-11-20 09:30:45.287297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:19.841 [2024-11-20 09:30:45.287328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:19.841 [2024-11-20 09:30:45.287340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.841 [2024-11-20 09:30:45.287361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:19.841 [2024-11-20 09:30:45.289422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.841 [2024-11-20 09:30:45.289442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:19.841 [2024-11-20 09:30:45.289450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:18:19.841 [2024-11-20 09:30:45.289457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.841 [2024-11-20 09:30:45.289661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.841 [2024-11-20 09:30:45.289673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:19.841 [2024-11-20 09:30:45.289680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:19.841 [2024-11-20 09:30:45.289686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.841 [2024-11-20 09:30:45.292506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.841 [2024-11-20 09:30:45.292527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:19.841 [2024-11-20 09:30:45.292534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:18:19.841 [2024-11-20 09:30:45.292541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.297742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.297839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
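
Every FTL management step in this log is emitted by trace_step as a four-line group, Action / name / duration / status, and finish_msg then reports the whole management process: here 'FTL startup' completed in 281.298 ms. The copy that follows moved 256 MB at an average of 43 MBps, which accounts for the roughly six seconds between the 09:30:39 startup messages and the 09:30:45 teardown. Below is a minimal standalone sketch of that per-step tracing idea, not SPDK's implementation: a monotonic-clock timer wrapped around each step, with the step name and the init_layout stand-in invented for illustration.

    #include <stdio.h>
    #include <time.h>

    /* Milliseconds from a monotonic clock, immune to wall-clock jumps. */
    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
    }

    /* Run one management step and print an Action/name/duration/status
     * group in the same shape as the log above. */
    static int run_step(const char *name, int (*fn)(void))
    {
        double start = now_ms();
        int status = fn();
        printf("Action\n name: %s\n duration: %.3f ms\n status: %d\n",
               name, now_ms() - start, status);
        return status;
    }

    static int init_layout(void) { return 0; } /* hypothetical step body */

    int main(void)
    {
        double start = now_ms();
        int rc = run_step("Initialize layout", init_layout);
        printf("Management process finished, duration = %.3f ms, result %d\n",
               now_ms() - start, rc);
        return rc;
    }
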
00:18:20.100 [2024-11-20 09:30:45.297850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:18:20.100 [2024-11-20 09:30:45.297856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.315642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.315669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.100 [2024-11-20 09:30:45.315677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms 00:18:20.100 [2024-11-20 09:30:45.315683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.327240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.327276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.100 [2024-11-20 09:30:45.327286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.509 ms 00:18:20.100 [2024-11-20 09:30:45.327295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.327405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.327413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.100 [2024-11-20 09:30:45.327420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:20.100 [2024-11-20 09:30:45.327426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.345402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.345428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:20.100 [2024-11-20 09:30:45.345437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.956 ms 00:18:20.100 [2024-11-20 09:30:45.345444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.362980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.363006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:20.100 [2024-11-20 09:30:45.363014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.496 ms 00:18:20.100 [2024-11-20 09:30:45.363019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.380408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.380531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.100 [2024-11-20 09:30:45.380544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.361 ms 00:18:20.100 [2024-11-20 09:30:45.380550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.397621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.100 [2024-11-20 09:30:45.397648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.100 [2024-11-20 09:30:45.397656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.017 ms 00:18:20.100 [2024-11-20 09:30:45.397662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.100 [2024-11-20 09:30:45.397690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.100 [2024-11-20 
09:30:45.397703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 
[2024-11-20 09:30:45.397854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.100 [2024-11-20 09:30:45.397932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.397996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:18:20.101 [2024-11-20 09:30:45.398002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:20.101 [2024-11-20 09:30:45.398325] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:20.101 [2024-11-20 09:30:45.398332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:20.101 [2024-11-20 09:30:45.398339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:20.101 [2024-11-20 09:30:45.398345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:20.101 [2024-11-20 09:30:45.398351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:20.101 [2024-11-20 09:30:45.398356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:20.101 [2024-11-20 09:30:45.398362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:20.101 [2024-11-20 09:30:45.398368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:20.101 [2024-11-20 09:30:45.398374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:20.101 [2024-11-20 09:30:45.398379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:20.101 [2024-11-20 09:30:45.398384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:20.101 [2024-11-20 09:30:45.398390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.101 [2024-11-20 09:30:45.398398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:20.101 [2024-11-20 09:30:45.398406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:18:20.101 [2024-11-20 09:30:45.398412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.101 [2024-11-20 09:30:45.408192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.101 [2024-11-20 09:30:45.408293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:20.101 [2024-11-20 09:30:45.408313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.766 ms 00:18:20.101 [2024-11-20 09:30:45.408320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.101 [2024-11-20 09:30:45.408607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.101 [2024-11-20 09:30:45.408620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:20.101 [2024-11-20 09:30:45.408627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:20.101 [2024-11-20 09:30:45.408633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.101 [2024-11-20 09:30:45.435756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.101 [2024-11-20 09:30:45.435852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.101 [2024-11-20 09:30:45.435865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.101 [2024-11-20 09:30:45.435872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.101 [2024-11-20 09:30:45.435952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.101 [2024-11-20 09:30:45.435960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.101 [2024-11-20 09:30:45.435966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.101 [2024-11-20 09:30:45.435972] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:20.101 [2024-11-20 09:30:45.436006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.101 [2024-11-20 09:30:45.436014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.101 [2024-11-20 09:30:45.436019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.436025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.436038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.436047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.102 [2024-11-20 09:30:45.436053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.436058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.495417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.495453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.102 [2024-11-20 09:30:45.495463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.495470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.545580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.545720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.102 [2024-11-20 09:30:45.545732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.545739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.545801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.545809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.102 [2024-11-20 09:30:45.545815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.545821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.545845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.545851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.102 [2024-11-20 09:30:45.545859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.545865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.545940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.545948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.102 [2024-11-20 09:30:45.545955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.545960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.545987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.545994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:20.102 [2024-11-20 09:30:45.546000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
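
The clean-shutdown path above persists each metadata region, dumps per-band validity (all 100 bands report 0 / 261120 valid blocks; at the 4 KiB FTL block size implied by the layout numbers, 261120 blocks is 1020 MiB per band), and prints WAF: inf, which follows directly from the stats: 960 total writes against 0 user writes leaves the write-amplification ratio with a zero denominator. The Rollback groups with duration 0.000 ms appear to be the reverse-order unwinding of the corresponding startup steps. The sketch below aggregates such a band dump from a saved log; the bands.log file name is hypothetical, and the line shape is taken from the dump above.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("bands.log", "r");   /* hypothetical saved log */
        if (!f) { perror("bands.log"); return 1; }

        char line[512], state[32];
        unsigned band, wr_cnt, bands = 0, free_bands = 0;
        unsigned long long valid, total, sum_valid = 0, sum_total = 0;

        while (fgets(line, sizeof(line), f)) {
            /* Skip the timestamp/file prefix; anchor on the keyword. */
            char *p = strstr(line, "Band ");
            if (!p)
                continue;
            if (sscanf(p, "Band %u: %llu / %llu wr_cnt: %u state: %31s",
                       &band, &valid, &total, &wr_cnt, state) == 5) {
                bands++;
                sum_valid += valid;
                sum_total += total;
                if (strcmp(state, "free") == 0)
                    free_bands++;
            }
        }
        fclose(f);
        printf("%u bands, %u free, validity %llu / %llu blocks\n",
               bands, free_bands, sum_valid, sum_total);
        return 0;
    }
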
00:18:20.102 [2024-11-20 09:30:45.546008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.546037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.546043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.102 [2024-11-20 09:30:45.546050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.546055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.546088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.102 [2024-11-20 09:30:45.546096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.102 [2024-11-20 09:30:45.546104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.102 [2024-11-20 09:30:45.546110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.102 [2024-11-20 09:30:45.546215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.071 ms, result 0 00:18:20.666 00:18:20.666 00:18:20.666 09:30:46 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:20.923 09:30:46 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:21.487 09:30:46 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.488 [2024-11-20 09:30:46.719991] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
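
The ftl_trim test then verifies the on-disk result: cmp --bytes=4194304 checks the first 4 MiB of the read-back data file against /dev/zero (trimmed ranges are expected to read back as zeros), md5sum fingerprints the file, and spdk_dd rewrites 1024 blocks of random_pattern into the ftl0 bdev described by ftl.json. Assuming the 4 KiB FTL block size seen in the layout dump, 1024 blocks is exactly the 4 MiB that cmp covered; that block size is my inference, not something the command line states. A plain-file sketch of the dd-style copy follows; real spdk_dd drives a bdev through the SPDK stack, and both file names here are placeholders.

    #include <stdio.h>

    #define BLOCK_SIZE 4096   /* assumed FTL block size */
    #define COUNT      1024   /* mirrors --count=1024   */

    int main(void)
    {
        FILE *in = fopen("random_pattern", "rb"); /* stands in for --if */
        FILE *out = fopen("ftl0.img", "wb");      /* stands in for --ob */
        if (!in || !out) { perror("open"); return 1; }

        char buf[BLOCK_SIZE];
        for (int i = 0; i < COUNT; i++) {
            /* Copy one block at a time, bailing out on any short I/O. */
            if (fread(buf, 1, BLOCK_SIZE, in) != BLOCK_SIZE ||
                fwrite(buf, 1, BLOCK_SIZE, out) != BLOCK_SIZE) {
                fprintf(stderr, "short I/O at block %d\n", i);
                return 1;
            }
        }
        fclose(in);
        fclose(out);
        return 0;
    }
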
00:18:21.488 [2024-11-20 09:30:46.720517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74233 ] 00:18:21.488 [2024-11-20 09:30:46.880013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.745 [2024-11-20 09:30:46.979951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.004 [2024-11-20 09:30:47.232708] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.004 [2024-11-20 09:30:47.232772] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.004 [2024-11-20 09:30:47.390784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.391035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.004 [2024-11-20 09:30:47.391066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.004 [2024-11-20 09:30:47.391079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.395021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.395078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.004 [2024-11-20 09:30:47.395097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:18:22.004 [2024-11-20 09:30:47.395111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.395287] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.004 [2024-11-20 09:30:47.396423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.004 [2024-11-20 09:30:47.396474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.396491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.004 [2024-11-20 09:30:47.396507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:18:22.004 [2024-11-20 09:30:47.396520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.398008] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.004 [2024-11-20 09:30:47.418103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.418176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.004 [2024-11-20 09:30:47.418196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.096 ms 00:18:22.004 [2024-11-20 09:30:47.418209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.418368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.418388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.004 [2024-11-20 09:30:47.418403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:22.004 [2024-11-20 09:30:47.418415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.424163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
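
This second startup picks up the superblock persisted by the previous run ("Load super block" rather than a fresh layout), and the two "Currently unable to find bdev" notices appear to be spdk_dd waiting for nvc0n1 to register before retrying. The Region type/ver/blk_offs/blk_sz table in the superblock dump maps directly onto the MiB layout dump once the block units are scaled by the 4 KiB FTL block implied by the numbers: the l2p region at blk_offs 0x20, blk_sz 0x5a00 sits at 0.12 MiB and spans 90.00 MiB, which also equals the 23592960 L2P entries at 4 bytes apiece (94371840 bytes). A small sketch of that conversion, with the 4 KiB block size stated as an assumption:

    #include <stdio.h>

    #define FTL_BLOCK_SIZE 4096ULL   /* assumed; consistent with the dump */

    /* Print a region's offset and length in MiB from block units,
     * matching the shape of the dump_region lines in the log. */
    static void dump_region(const char *name, unsigned long long blk_offs,
                            unsigned long long blk_sz)
    {
        printf("Region %-8s offset: %9.2f MiB blocks: %9.2f MiB\n", name,
               blk_offs * FTL_BLOCK_SIZE / (1024.0 * 1024.0),
               blk_sz * FTL_BLOCK_SIZE / (1024.0 * 1024.0));
    }

    int main(void)
    {
        dump_region("sb", 0x0, 0x20);     /* -> 0.00 MiB, 0.12 MiB  */
        dump_region("l2p", 0x20, 0x5a00); /* -> 0.12 MiB, 90.00 MiB */
        return 0;
    }
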
00:18:22.004 [2024-11-20 09:30:47.424222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.004 [2024-11-20 09:30:47.424241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.685 ms 00:18:22.004 [2024-11-20 09:30:47.424255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.424406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.424423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.004 [2024-11-20 09:30:47.424437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:22.004 [2024-11-20 09:30:47.424450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.424491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.424509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.004 [2024-11-20 09:30:47.424523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.004 [2024-11-20 09:30:47.424534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.424568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.004 [2024-11-20 09:30:47.429861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.429912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.004 [2024-11-20 09:30:47.429929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:18:22.004 [2024-11-20 09:30:47.429942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.430035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.430052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.004 [2024-11-20 09:30:47.430067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:22.004 [2024-11-20 09:30:47.430079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.430111] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.004 [2024-11-20 09:30:47.430143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.004 [2024-11-20 09:30:47.430194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.004 [2024-11-20 09:30:47.430219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.004 [2024-11-20 09:30:47.430387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.004 [2024-11-20 09:30:47.430407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.004 [2024-11-20 09:30:47.430424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.004 [2024-11-20 09:30:47.430440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.004 [2024-11-20 09:30:47.430458] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.004 [2024-11-20 09:30:47.430471] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.004 [2024-11-20 09:30:47.430483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.004 [2024-11-20 09:30:47.430495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.004 [2024-11-20 09:30:47.430520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.004 [2024-11-20 09:30:47.430535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.430548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.004 [2024-11-20 09:30:47.430562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:18:22.004 [2024-11-20 09:30:47.430575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.430701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.004 [2024-11-20 09:30:47.430716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.004 [2024-11-20 09:30:47.430734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:22.004 [2024-11-20 09:30:47.430747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.004 [2024-11-20 09:30:47.430889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.004 [2024-11-20 09:30:47.430913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.004 [2024-11-20 09:30:47.430928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.004 [2024-11-20 09:30:47.430942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.004 [2024-11-20 09:30:47.430955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.004 [2024-11-20 09:30:47.430967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.004 [2024-11-20 09:30:47.430980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.004 [2024-11-20 09:30:47.430991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.004 [2024-11-20 09:30:47.431004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.004 [2024-11-20 09:30:47.431016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.004 [2024-11-20 09:30:47.431028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.004 [2024-11-20 09:30:47.431040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.004 [2024-11-20 09:30:47.431052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.004 [2024-11-20 09:30:47.431073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.004 [2024-11-20 09:30:47.431086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:22.004 [2024-11-20 09:30:47.431097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.004 [2024-11-20 09:30:47.431110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.004 [2024-11-20 09:30:47.431122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:22.004 [2024-11-20 09:30:47.431134] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.004 [2024-11-20 09:30:47.431146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.004 [2024-11-20 09:30:47.431158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.004 [2024-11-20 09:30:47.431171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.005 [2024-11-20 09:30:47.431195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.005 [2024-11-20 09:30:47.431231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.005 [2024-11-20 09:30:47.431266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.005 [2024-11-20 09:30:47.431318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.005 [2024-11-20 09:30:47.431345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.005 [2024-11-20 09:30:47.431356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:22.005 [2024-11-20 09:30:47.431367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.005 [2024-11-20 09:30:47.431380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.005 [2024-11-20 09:30:47.431392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:22.005 [2024-11-20 09:30:47.431405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.005 [2024-11-20 09:30:47.431429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:22.005 [2024-11-20 09:30:47.431440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.005 [2024-11-20 09:30:47.431463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.005 [2024-11-20 09:30:47.431476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.005 [2024-11-20 09:30:47.431507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.005 [2024-11-20 09:30:47.431520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.005 [2024-11-20 09:30:47.431532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.005 
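The dump_region records above give each FTL region's offset and length in MiB, and the figures can be cross-checked directly: the l2p region must be exactly L2P entries × L2P address size, and each region plus its length should land on the offset of the region that follows it (each mirror sits right after its primary). A quick back-of-the-envelope check over the numbers printed above -- commentary on the log, not part of the test run itself:

    # l2p: 23592960 entries * 4 B per entry = 94371840 B = exactly 90.00 MiB
    awk 'BEGIN {
        printf "l2p size:    %.2f MiB\n", 23592960 * 4 / (1024 * 1024)  # 90.00
        printf "band_md end: %.2f MiB\n", 90.12 + 0.50    # 90.62 = band_md_mirror offset
        printf "trim_md end: %.2f MiB\n", 123.12 + 0.25   # 123.38 = trim_md_mirror offset
    }'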
[2024-11-20 09:30:47.431544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.005 [2024-11-20 09:30:47.431556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.005 [2024-11-20 09:30:47.431568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.005 [2024-11-20 09:30:47.431582] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.005 [2024-11-20 09:30:47.431597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.005 [2024-11-20 09:30:47.431626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:22.005 [2024-11-20 09:30:47.431638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.005 [2024-11-20 09:30:47.431651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:22.005 [2024-11-20 09:30:47.431664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:22.005 [2024-11-20 09:30:47.431677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:22.005 [2024-11-20 09:30:47.431690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:22.005 [2024-11-20 09:30:47.431702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:22.005 [2024-11-20 09:30:47.431715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:22.005 [2024-11-20 09:30:47.431728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:22.005 [2024-11-20 09:30:47.431791] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.005 [2024-11-20 09:30:47.431804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.005 [2024-11-20 09:30:47.431830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.005 [2024-11-20 09:30:47.431841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.005 [2024-11-20 09:30:47.431853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.005 [2024-11-20 09:30:47.431866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.005 [2024-11-20 09:30:47.431880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.005 [2024-11-20 09:30:47.431898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:18:22.005 [2024-11-20 09:30:47.431911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.471905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.472148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.263 [2024-11-20 09:30:47.472315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.911 ms 00:18:22.263 [2024-11-20 09:30:47.472367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.472592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.472663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.263 [2024-11-20 09:30:47.472774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:22.263 [2024-11-20 09:30:47.472902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.529289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.529512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.263 [2024-11-20 09:30:47.529577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.313 ms 00:18:22.263 [2024-11-20 09:30:47.529607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.529740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.529900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.263 [2024-11-20 09:30:47.529924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.263 [2024-11-20 09:30:47.529943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.530318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.530415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.263 [2024-11-20 09:30:47.530466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:18:22.263 [2024-11-20 09:30:47.530494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.530652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.530676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.263 [2024-11-20 09:30:47.530724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:22.263 [2024-11-20 09:30:47.530734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.544253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.544389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.263 [2024-11-20 09:30:47.544443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.496 ms 00:18:22.263 [2024-11-20 09:30:47.544465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.556856] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:22.263 [2024-11-20 09:30:47.556995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.263 [2024-11-20 09:30:47.557058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.557079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.263 [2024-11-20 09:30:47.557129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:18:22.263 [2024-11-20 09:30:47.557149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.582196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.582422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.263 [2024-11-20 09:30:47.582482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.582 ms 00:18:22.263 [2024-11-20 09:30:47.582524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.263 [2024-11-20 09:30:47.594124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.263 [2024-11-20 09:30:47.594254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.264 [2024-11-20 09:30:47.594312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.485 ms 00:18:22.264 [2024-11-20 09:30:47.594336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.605766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.605883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.264 [2024-11-20 09:30:47.605930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.348 ms 00:18:22.264 [2024-11-20 09:30:47.605950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.606624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.606717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.264 [2024-11-20 09:30:47.606764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:18:22.264 [2024-11-20 09:30:47.606786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.661693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.661830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.264 [2024-11-20 09:30:47.661885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.868 ms 00:18:22.264 [2024-11-20 09:30:47.661908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.672685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.264 [2024-11-20 09:30:47.686947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.687077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.264 [2024-11-20 09:30:47.687129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.866 ms 00:18:22.264 [2024-11-20 09:30:47.687151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.687257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.687285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.264 [2024-11-20 09:30:47.687331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:22.264 [2024-11-20 09:30:47.687383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.687450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.687531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.264 [2024-11-20 09:30:47.687555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:22.264 [2024-11-20 09:30:47.687564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.687594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.687605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.264 [2024-11-20 09:30:47.687614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.264 [2024-11-20 09:30:47.687621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.687650] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.264 [2024-11-20 09:30:47.687659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.687667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.264 [2024-11-20 09:30:47.687674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.264 [2024-11-20 09:30:47.687682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.711004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.711139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.264 [2024-11-20 09:30:47.711156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:18:22.264 [2024-11-20 09:30:47.711164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.264 [2024-11-20 09:30:47.711259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.264 [2024-11-20 09:30:47.711270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.264 [2024-11-20 09:30:47.711278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:22.264 [2024-11-20 09:30:47.711286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
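Every management step above is traced by mngt/ftl_mngt.c as a fixed quadruple of NOTICE records -- Action, name, duration, status -- so per-step timings can be pulled straight out of a saved console log. A small awk sketch that ranks steps by duration to spot the slow ones (for this startup, the 56.313 ms "Initialize NV cache" step); it assumes one record per line, as the console originally printed them, and "console.log" is a placeholder path:

    awk '{
        for (i = 1; i <= NF; i++) {
            if ($i == "name:") { name = ""; for (j = i + 1; j <= NF; j++) name = name " " $j }
            if ($i == "duration:") print $(i + 1), name   # duration in ms, then step name
        }
    }' console.log | sort -rn | head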
00:18:22.264 [2024-11-20 09:30:47.712089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:22.264 [2024-11-20 09:30:47.715045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.057 ms, result 0
00:18:22.264 [2024-11-20 09:30:47.715607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:22.521 [2024-11-20 09:30:47.728480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:22.521  [2024-11-20T09:30:47.977Z] Copying: 4096/4096 [kB] (average 44 MBps)
[2024-11-20 09:30:47.821468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 09:30:47.830650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.830688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-11-20 09:30:47.830700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
[2024-11-20 09:30:47.830713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 09:30:47.830734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 09:30:47.833347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.833373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-11-20 09:30:47.833383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms
[2024-11-20 09:30:47.833392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 09:30:47.835103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.835136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-11-20 09:30:47.835145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms
[2024-11-20 09:30:47.835152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 09:30:47.839055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.839085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-11-20 09:30:47.839094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms
[2024-11-20 09:30:47.839101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 09:30:47.846075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.846106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-11-20 09:30:47.846117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.949 ms
[2024-11-20 09:30:47.846125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 09:30:47.869519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:30:47.869552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-11-20 09:30:47.869563] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 23.340 ms 00:18:22.521 [2024-11-20 09:30:47.869571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.521 [2024-11-20 09:30:47.883485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.521 [2024-11-20 09:30:47.883526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:22.521 [2024-11-20 09:30:47.883540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.862 ms 00:18:22.521 [2024-11-20 09:30:47.883549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.521 [2024-11-20 09:30:47.883681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.521 [2024-11-20 09:30:47.883691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:22.521 [2024-11-20 09:30:47.883699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:22.521 [2024-11-20 09:30:47.883707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.521 [2024-11-20 09:30:47.906642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.522 [2024-11-20 09:30:47.906684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:22.522 [2024-11-20 09:30:47.906695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.910 ms 00:18:22.522 [2024-11-20 09:30:47.906703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.522 [2024-11-20 09:30:47.929268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.522 [2024-11-20 09:30:47.929424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:22.522 [2024-11-20 09:30:47.929440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.528 ms 00:18:22.522 [2024-11-20 09:30:47.929447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.522 [2024-11-20 09:30:47.951145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.522 [2024-11-20 09:30:47.951176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:22.522 [2024-11-20 09:30:47.951187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.663 ms 00:18:22.522 [2024-11-20 09:30:47.951194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.522 [2024-11-20 09:30:47.973246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.781 [2024-11-20 09:30:47.973386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:22.781 [2024-11-20 09:30:47.973402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.989 ms 00:18:22.781 [2024-11-20 09:30:47.973409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.781 [2024-11-20 09:30:47.973634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:22.781 [2024-11-20 09:30:47.973718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:22.781 [2024-11-20 09:30:47.973824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.973986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.974989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:22.781 [2024-11-20 09:30:47.975222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975587] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.975994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.976015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.976035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:22.782 [2024-11-20 09:30:47.976082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:22.782 [2024-11-20 09:30:47.976103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:22.782 [2024-11-20 09:30:47.976124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:22.782 [2024-11-20 09:30:47.976143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:22.782 [2024-11-20 09:30:47.976162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:22.782 [2024-11-20 09:30:47.976182] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:22.782 [2024-11-20 09:30:47.976200] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:22.782 [2024-11-20 09:30:47.976220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:22.782 [2024-11-20 09:30:47.976238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:22.782 [2024-11-20 09:30:47.976256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:22.782 [2024-11-20 09:30:47.976273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:22.782 [2024-11-20 09:30:47.976295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.782 [2024-11-20 09:30:47.976341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:22.782 [2024-11-20 09:30:47.976367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:18:22.782 [2024-11-20 09:30:47.976388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:47.996059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.782 [2024-11-20 09:30:47.996205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:22.782 [2024-11-20 09:30:47.996271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.582 ms 00:18:22.782 [2024-11-20 09:30:47.996322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:47.996744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.782 [2024-11-20 09:30:47.996827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:22.782 [2024-11-20 09:30:47.996877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:22.782 [2024-11-20 09:30:47.996899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.031524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.031672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.782 [2024-11-20 09:30:48.031724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.031746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.031844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.031899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.782 [2024-11-20 09:30:48.031921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.031940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.032054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.032080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.782 [2024-11-20 09:30:48.032099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.032117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.032181] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.032210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.782 [2024-11-20 09:30:48.032229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.032248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.109228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.109409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.782 [2024-11-20 09:30:48.109461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.109483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.172706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.172906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.782 [2024-11-20 09:30:48.172956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.172977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.173045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.782 [2024-11-20 09:30:48.173068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.782 [2024-11-20 09:30:48.173087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.782 [2024-11-20 09:30:48.173105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.782 [2024-11-20 09:30:48.173143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.783 [2024-11-20 09:30:48.173162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.783 [2024-11-20 09:30:48.173189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.783 [2024-11-20 09:30:48.173246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.783 [2024-11-20 09:30:48.173379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.783 [2024-11-20 09:30:48.173404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.783 [2024-11-20 09:30:48.173423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.783 [2024-11-20 09:30:48.173442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.783 [2024-11-20 09:30:48.173549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.783 [2024-11-20 09:30:48.173574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:22.783 [2024-11-20 09:30:48.173593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.783 [2024-11-20 09:30:48.173616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.783 [2024-11-20 09:30:48.173663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.783 [2024-11-20 09:30:48.173721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.783 [2024-11-20 09:30:48.173743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.783 [2024-11-20 09:30:48.173760] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:18:22.783 [2024-11-20 09:30:48.173817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:22.783 [2024-11-20 09:30:48.173840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:22.783 [2024-11-20 09:30:48.173892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:22.783 [2024-11-20 09:30:48.174144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.783 [2024-11-20 09:30:48.174390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.727 ms, result 0
00:18:23.744
00:18:23.744
00:18:23.744 09:30:48 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74258
00:18:23.744 09:30:48 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:23.744 09:30:48 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74258
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74258 ']'
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:23.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:23.744 09:30:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:23.744 [2024-11-20 09:30:48.954791] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
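Here trim.sh relaunches spdk_tgt (with the ftl_init debug log flag enabled via -L) for the restore phase of the test, and waitforlisten blocks until the new process accepts RPCs on /var/tmp/spdk.sock. Conceptually the helper boils down to polling the RPC socket -- a minimal stand-in sketch, not the actual implementation (the real function in common/autotest_common.sh also checks that the PID is still alive and honors max_retries):

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the target's RPC server is up
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done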
00:18:23.744 [2024-11-20 09:30:48.955089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74258 ] 00:18:23.744 [2024-11-20 09:30:49.113966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.000 [2024-11-20 09:30:49.211521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.563 09:30:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.563 09:30:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:24.563 09:30:49 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:24.821 [2024-11-20 09:30:50.028547] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.821 [2024-11-20 09:30:50.028614] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.821 [2024-11-20 09:30:50.198667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.198723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.821 [2024-11-20 09:30:50.198739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.821 [2024-11-20 09:30:50.198747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.201429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.201461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.821 [2024-11-20 09:30:50.201473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:18:24.821 [2024-11-20 09:30:50.201480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.201585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.821 [2024-11-20 09:30:50.202293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.821 [2024-11-20 09:30:50.202332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.202340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.821 [2024-11-20 09:30:50.202350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:18:24.821 [2024-11-20 09:30:50.202357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.203497] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:24.821 [2024-11-20 09:30:50.215797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.215834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:24.821 [2024-11-20 09:30:50.215847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:18:24.821 [2024-11-20 09:30:50.215857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.215942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.215955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:24.821 [2024-11-20 09:30:50.215964] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:24.821 [2024-11-20 09:30:50.215972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.220845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.220880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.821 [2024-11-20 09:30:50.220889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:18:24.821 [2024-11-20 09:30:50.220899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.220993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.221005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.821 [2024-11-20 09:30:50.221014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:24.821 [2024-11-20 09:30:50.221026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.221054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.221064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.821 [2024-11-20 09:30:50.221071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:24.821 [2024-11-20 09:30:50.221080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.221103] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:24.821 [2024-11-20 09:30:50.224515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.224541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.821 [2024-11-20 09:30:50.224552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:18:24.821 [2024-11-20 09:30:50.224560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.224598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.224606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.821 [2024-11-20 09:30:50.224615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:24.821 [2024-11-20 09:30:50.224625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.821 [2024-11-20 09:30:50.224647] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:24.821 [2024-11-20 09:30:50.224663] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:24.821 [2024-11-20 09:30:50.224704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:24.821 [2024-11-20 09:30:50.224718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:24.821 [2024-11-20 09:30:50.224823] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.821 [2024-11-20 09:30:50.224833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.821 [2024-11-20 09:30:50.224849] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:24.821 [2024-11-20 09:30:50.224858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.821 [2024-11-20 09:30:50.224869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.821 [2024-11-20 09:30:50.224877] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:24.821 [2024-11-20 09:30:50.224885] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.821 [2024-11-20 09:30:50.224892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.821 [2024-11-20 09:30:50.224902] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.821 [2024-11-20 09:30:50.224909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.821 [2024-11-20 09:30:50.224918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.821 [2024-11-20 09:30:50.224926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:18:24.822 [2024-11-20 09:30:50.224936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.822 [2024-11-20 09:30:50.225033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.822 [2024-11-20 09:30:50.225043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.822 [2024-11-20 09:30:50.225050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:24.822 [2024-11-20 09:30:50.225060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.822 [2024-11-20 09:30:50.225159] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.822 [2024-11-20 09:30:50.225170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.822 [2024-11-20 09:30:50.225178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.822 [2024-11-20 09:30:50.225202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.822 [2024-11-20 09:30:50.225228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.822 [2024-11-20 09:30:50.225243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.822 [2024-11-20 09:30:50.225251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:24.822 [2024-11-20 09:30:50.225257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.822 [2024-11-20 09:30:50.225265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.822 [2024-11-20 09:30:50.225271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:24.822 [2024-11-20 09:30:50.225279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 
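The same layout is reported twice: dump_region prints offsets and sizes in MiB (as above), while the ftl_superblock_v5_md_layout_dump records give the identical regions as hex block offsets and counts. The two views agree if one block is FTL's 4 KiB block size -- for example, region type 0x2 appears to correspond to the l2p region and type 0x9 to the base-device data region; a quick conversion, commentary rather than test output:

    echo $(( 0x5a00 ))                      # 23040 blocks
    echo $(( 0x5a00 * 4096 / 1048576 ))     # 90 MiB     -> matches "l2p ... blocks: 90.00 MiB"
    echo $(( 0x1900000 * 4096 / 1048576 ))  # 102400 MiB -> matches "data_btm ... 102400.00 MiB"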
[2024-11-20 09:30:50.225286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.822 [2024-11-20 09:30:50.225293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.822 [2024-11-20 09:30:50.225342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.822 [2024-11-20 09:30:50.225366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.822 [2024-11-20 09:30:50.225388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.822 [2024-11-20 09:30:50.225410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.822 [2024-11-20 09:30:50.225432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.822 [2024-11-20 09:30:50.225448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.822 [2024-11-20 09:30:50.225456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:24.822 [2024-11-20 09:30:50.225462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.822 [2024-11-20 09:30:50.225470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.822 [2024-11-20 09:30:50.225476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:24.822 [2024-11-20 09:30:50.225485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:24.822 [2024-11-20 09:30:50.225499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:24.822 [2024-11-20 09:30:50.225507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.822 [2024-11-20 09:30:50.225523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.822 [2024-11-20 09:30:50.225531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.822 [2024-11-20 09:30:50.225547] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:24.822 [2024-11-20 09:30:50.225554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.822 [2024-11-20 09:30:50.225561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.822 [2024-11-20 09:30:50.225568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.822 [2024-11-20 09:30:50.225576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.822 [2024-11-20 09:30:50.225583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.822 [2024-11-20 09:30:50.225592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.822 [2024-11-20 09:30:50.225601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:24.822 [2024-11-20 09:30:50.225619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:24.822 [2024-11-20 09:30:50.225629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:24.822 [2024-11-20 09:30:50.225636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:24.822 [2024-11-20 09:30:50.225645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:24.822 [2024-11-20 09:30:50.225652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:24.822 [2024-11-20 09:30:50.225660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:24.822 [2024-11-20 09:30:50.225667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:24.822 [2024-11-20 09:30:50.225675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:24.822 [2024-11-20 09:30:50.225682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:24.822 [2024-11-20 09:30:50.225721] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.822 [2024-11-20 
09:30:50.225729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.822 [2024-11-20 09:30:50.225747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.822 [2024-11-20 09:30:50.225756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.822 [2024-11-20 09:30:50.225762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.822 [2024-11-20 09:30:50.225771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.822 [2024-11-20 09:30:50.225778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.822 [2024-11-20 09:30:50.225787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:18:24.822 [2024-11-20 09:30:50.225795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.822 [2024-11-20 09:30:50.251611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.822 [2024-11-20 09:30:50.251767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.822 [2024-11-20 09:30:50.251832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:18:24.822 [2024-11-20 09:30:50.251858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.822 [2024-11-20 09:30:50.252009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.822 [2024-11-20 09:30:50.252035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.822 [2024-11-20 09:30:50.252093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:24.822 [2024-11-20 09:30:50.252115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.282347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.282520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.080 [2024-11-20 09:30:50.282657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.192 ms 00:18:25.080 [2024-11-20 09:30:50.282680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.282768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.282928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.080 [2024-11-20 09:30:50.282955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:25.080 [2024-11-20 09:30:50.282975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.283295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.283418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.080 [2024-11-20 09:30:50.283473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:25.080 [2024-11-20 09:30:50.283495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.283633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.283655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.080 [2024-11-20 09:30:50.283740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:25.080 [2024-11-20 09:30:50.283762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.297792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.297914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.080 [2024-11-20 09:30:50.297978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.993 ms 00:18:25.080 [2024-11-20 09:30:50.298001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.310310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:25.080 [2024-11-20 09:30:50.310437] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:25.080 [2024-11-20 09:30:50.310515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.310537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:25.080 [2024-11-20 09:30:50.310559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.364 ms 00:18:25.080 [2024-11-20 09:30:50.310578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.334682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.334800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:25.080 [2024-11-20 09:30:50.334853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.021 ms 00:18:25.080 [2024-11-20 09:30:50.334875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.346166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.346268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:25.080 [2024-11-20 09:30:50.346350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.205 ms 00:18:25.080 [2024-11-20 09:30:50.346373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.357710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.357810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:25.080 [2024-11-20 09:30:50.357860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.264 ms 00:18:25.080 [2024-11-20 09:30:50.357882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.358530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.358616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.080 [2024-11-20 09:30:50.358665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:18:25.080 [2024-11-20 09:30:50.358687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 
09:30:50.422523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.422720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:25.080 [2024-11-20 09:30:50.422785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.796 ms 00:18:25.080 [2024-11-20 09:30:50.422810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.433417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:25.080 [2024-11-20 09:30:50.447981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.448136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.080 [2024-11-20 09:30:50.448187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.062 ms 00:18:25.080 [2024-11-20 09:30:50.448212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.448328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.448357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:25.080 [2024-11-20 09:30:50.448377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:25.080 [2024-11-20 09:30:50.448398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.448456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.448537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.080 [2024-11-20 09:30:50.448562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:25.080 [2024-11-20 09:30:50.448584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.448620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.448642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.080 [2024-11-20 09:30:50.448661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.080 [2024-11-20 09:30:50.448683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.448763] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:25.080 [2024-11-20 09:30:50.448822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.448847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:25.080 [2024-11-20 09:30:50.448889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:25.080 [2024-11-20 09:30:50.448910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.472081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.472211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.080 [2024-11-20 09:30:50.472231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:18:25.080 [2024-11-20 09:30:50.472240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.472342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.080 [2024-11-20 09:30:50.472354] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:25.080 [2024-11-20 09:30:50.472366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:25.080 [2024-11-20 09:30:50.472373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.080 [2024-11-20 09:30:50.473100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.080 [2024-11-20 09:30:50.476116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.166 ms, result 0 00:18:25.080 [2024-11-20 09:30:50.477104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:25.080 Some configs were skipped because the RPC state that can call them passed over. 00:18:25.080 09:30:50 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:25.338 [2024-11-20 09:30:50.711358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.338 [2024-11-20 09:30:50.711506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:25.338 [2024-11-20 09:30:50.711566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:18:25.338 [2024-11-20 09:30:50.711592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.338 [2024-11-20 09:30:50.711642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.703 ms, result 0 00:18:25.338 true 00:18:25.338 09:30:50 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:25.595 [2024-11-20 09:30:50.919281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.595 [2024-11-20 09:30:50.919453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:25.595 [2024-11-20 09:30:50.919511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:18:25.595 [2024-11-20 09:30:50.919535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.595 [2024-11-20 09:30:50.919589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.369 ms, result 0 00:18:25.595 true 00:18:25.595 09:30:50 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74258 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74258 ']' 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74258 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74258 00:18:25.595 killing process with pid 74258 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74258' 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74258 00:18:25.595 09:30:50 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74258 00:18:26.553 [2024-11-20 09:30:51.654613] 
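The two trims above are issued through SPDK's JSON-RPC interface via scripts/rpc.py. A minimal sketch of the same bdev_ftl_unmap call sent directly over the RPC socket (assuming the default /var/tmp/spdk.sock listener; the helper function is illustrative, and the parameter names mirror the rpc.py flags logged above):

    import json
    import socket

    def bdev_ftl_unmap(name: str, lba: int, num_blocks: int,
                       sock_path: str = "/var/tmp/spdk.sock") -> dict:
        # One JSON-RPC 2.0 request over SPDK's Unix-domain socket, equivalent
        # to: rpc.py bdev_ftl_unmap -b <name> --lba <lba> --num_blocks <n>
        request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "bdev_ftl_unmap",
            "params": {"name": name, "lba": lba, "num_blocks": num_blocks},
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            response = sock.recv(65536)  # small single-frame reply expected here
        return json.loads(response)

    # The test trims 1024 blocks at each end of the 23592960-block address
    # space: lba 0 and lba 23591936 (= 23592960 - 1024), as logged above.

Both unmaps complete cleanly here: each "FTL trim" management process finishes with result 0, and the shell evaluates the RPC call to true before the test moves on to killing pid 74258.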
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.654664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.553 [2024-11-20 09:30:51.654675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:26.553 [2024-11-20 09:30:51.654682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.654701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:26.553 [2024-11-20 09:30:51.656788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.656814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.553 [2024-11-20 09:30:51.656826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:18:26.553 [2024-11-20 09:30:51.656833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.657055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.657062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.553 [2024-11-20 09:30:51.657070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:18:26.553 [2024-11-20 09:30:51.657076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.660330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.660354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.553 [2024-11-20 09:30:51.660366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:18:26.553 [2024-11-20 09:30:51.660372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.665778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.665890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.553 [2024-11-20 09:30:51.665906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.376 ms 00:18:26.553 [2024-11-20 09:30:51.665912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.673726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.673751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.553 [2024-11-20 09:30:51.673761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.767 ms 00:18:26.553 [2024-11-20 09:30:51.673772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.679716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.679745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.553 [2024-11-20 09:30:51.679754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.914 ms 00:18:26.553 [2024-11-20 09:30:51.679761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.679869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.679877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.553 [2024-11-20 09:30:51.679886] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:26.553 [2024-11-20 09:30:51.679891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.687903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.687927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.553 [2024-11-20 09:30:51.687936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.993 ms 00:18:26.553 [2024-11-20 09:30:51.687941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.695381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.695407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.553 [2024-11-20 09:30:51.695418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.408 ms 00:18:26.553 [2024-11-20 09:30:51.695424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.702327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.702351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.553 [2024-11-20 09:30:51.702363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:18:26.553 [2024-11-20 09:30:51.702368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.709842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.553 [2024-11-20 09:30:51.709868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.553 [2024-11-20 09:30:51.709878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.419 ms 00:18:26.553 [2024-11-20 09:30:51.709884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.553 [2024-11-20 09:30:51.709922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.553 [2024-11-20 09:30:51.709935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.709995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710008] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 
[2024-11-20 09:30:51.710175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.553 [2024-11-20 09:30:51.710215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:26.554 [2024-11-20 09:30:51.710363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:26.554 [2024-11-20 09:30:51.710658] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.554 [2024-11-20 09:30:51.710667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:26.554 [2024-11-20 09:30:51.710681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.554 [2024-11-20 09:30:51.710688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.554 [2024-11-20 09:30:51.710693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.554 [2024-11-20 09:30:51.710701] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.554 [2024-11-20 09:30:51.710706] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.554 [2024-11-20 09:30:51.710714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.554 [2024-11-20 09:30:51.710720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.554 [2024-11-20 09:30:51.710726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.554 [2024-11-20 09:30:51.710731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.554 [2024-11-20 09:30:51.710738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
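The statistics block just dumped shows total writes: 960 against user writes: 0, which is why WAF prints as inf: write amplification is media writes divided by host writes, and the ratio is undefined until user data lands. A tiny sketch of that arithmetic (the helper name is illustrative, not an SPDK symbol):

    def write_amplification(media_writes: int, user_writes: int) -> float:
        # WAF = media (NAND) writes / host (user) writes; FTL reports "inf"
        # when the host has not written anything yet, as in the dump above.
        if user_writes == 0:
            return float("inf")
        return media_writes / user_writes

    print(write_amplification(960, 0))  # inf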
00:18:26.554 [2024-11-20 09:30:51.710744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.554 [2024-11-20 09:30:51.710752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:18:26.554 [2024-11-20 09:30:51.710760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.554 [2024-11-20 09:30:51.721092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.554 [2024-11-20 09:30:51.721225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.554 [2024-11-20 09:30:51.721244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.312 ms 00:18:26.554 [2024-11-20 09:30:51.721250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.554 [2024-11-20 09:30:51.721571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.554 [2024-11-20 09:30:51.721585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.554 [2024-11-20 09:30:51.721597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:26.554 [2024-11-20 09:30:51.721603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.554 [2024-11-20 09:30:51.756783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.554 [2024-11-20 09:30:51.756811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.554 [2024-11-20 09:30:51.756821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.554 [2024-11-20 09:30:51.756828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.554 [2024-11-20 09:30:51.756918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.756926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.555 [2024-11-20 09:30:51.756936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.756942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.756977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.756984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.555 [2024-11-20 09:30:51.756993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.756998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.757013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.757020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.555 [2024-11-20 09:30:51.757027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.757034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.817772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.817815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.555 [2024-11-20 09:30:51.817827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.817833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 
09:30:51.868166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.555 [2024-11-20 09:30:51.868378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.555 [2024-11-20 09:30:51.868484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.555 [2024-11-20 09:30:51.868528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.555 [2024-11-20 09:30:51.868626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.555 [2024-11-20 09:30:51.868671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.555 [2024-11-20 09:30:51.868722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.555 [2024-11-20 09:30:51.868771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.555 [2024-11-20 09:30:51.868778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.555 [2024-11-20 09:30:51.868784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.555 [2024-11-20 09:30:51.868887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 214.259 ms, result 0 00:18:27.120 09:30:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.377 [2024-11-20 09:30:52.587153] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:27.378 [2024-11-20 09:30:52.587283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74305 ] 00:18:27.378 [2024-11-20 09:30:52.749356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.635 [2024-11-20 09:30:52.847787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.894 [2024-11-20 09:30:53.101448] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.894 [2024-11-20 09:30:53.101509] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.894 [2024-11-20 09:30:53.255482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.255534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:27.894 [2024-11-20 09:30:53.255547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.894 [2024-11-20 09:30:53.255555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.258207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.258243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.894 [2024-11-20 09:30:53.258253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:18:27.894 [2024-11-20 09:30:53.258260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.258365] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:27.894 [2024-11-20 09:30:53.259082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:27.894 [2024-11-20 09:30:53.259109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.259117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.894 [2024-11-20 09:30:53.259126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:18:27.894 [2024-11-20 09:30:53.259133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.260249] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:27.894 [2024-11-20 09:30:53.272496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.272531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:27.894 [2024-11-20 09:30:53.272542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:18:27.894 [2024-11-20 09:30:53.272550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.272640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.272650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:27.894 [2024-11-20 09:30:53.272659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:27.894 [2024-11-20 
09:30:53.272666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.277466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.277617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.894 [2024-11-20 09:30:53.277632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:18:27.894 [2024-11-20 09:30:53.277641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.277728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.277738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.894 [2024-11-20 09:30:53.277746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:27.894 [2024-11-20 09:30:53.277753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.277778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.277788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:27.894 [2024-11-20 09:30:53.277796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:27.894 [2024-11-20 09:30:53.277804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.277824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:27.894 [2024-11-20 09:30:53.281056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.281173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.894 [2024-11-20 09:30:53.281188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:18:27.894 [2024-11-20 09:30:53.281197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.281233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.281243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:27.894 [2024-11-20 09:30:53.281252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:27.894 [2024-11-20 09:30:53.281260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.281278] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:27.894 [2024-11-20 09:30:53.281309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:27.894 [2024-11-20 09:30:53.281345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:27.894 [2024-11-20 09:30:53.281362] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:27.894 [2024-11-20 09:30:53.281466] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:27.894 [2024-11-20 09:30:53.281477] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:27.894 [2024-11-20 09:30:53.281489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
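The layout figures printed during each FTL startup (once in the dump above, and repeated just below) are internally consistent: 23592960 L2P entries at a 4-byte address size come to exactly the 90.00 MiB reported for the l2p region, and the same entry count times the FTL block size yields the user-visible capacity. A quick check of that arithmetic (the 4 KiB block size is an assumption based on SPDK FTL's default, not a value printed in this log):

    MiB = 1024 * 1024
    L2P_ENTRIES = 23592960      # "L2P entries" in the layout dump
    L2P_ADDR_SIZE = 4           # "L2P address size", bytes per entry
    FTL_BLOCK_SIZE = 4096       # assumed 4 KiB FTL block size

    print(L2P_ENTRIES * L2P_ADDR_SIZE / MiB)   # 90.0 -> "Region l2p ... blocks: 90.00 MiB"
    print(L2P_ENTRIES * FTL_BLOCK_SIZE / MiB)  # 92160.0 MiB, i.e. 90 GiB of user space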
00:18:27.894 [2024-11-20 09:30:53.281499] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281512] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:27.894 [2024-11-20 09:30:53.281529] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:27.894 [2024-11-20 09:30:53.281537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:27.894 [2024-11-20 09:30:53.281545] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:27.894 [2024-11-20 09:30:53.281554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.281562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:27.894 [2024-11-20 09:30:53.281571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:27.894 [2024-11-20 09:30:53.281579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.281668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.894 [2024-11-20 09:30:53.281678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:27.894 [2024-11-20 09:30:53.281689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:27.894 [2024-11-20 09:30:53.281697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.894 [2024-11-20 09:30:53.281813] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:27.894 [2024-11-20 09:30:53.281825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:27.894 [2024-11-20 09:30:53.281834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:27.894 [2024-11-20 09:30:53.281859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:27.894 [2024-11-20 09:30:53.281884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.894 [2024-11-20 09:30:53.281900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:27.894 [2024-11-20 09:30:53.281907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:27.894 [2024-11-20 09:30:53.281915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.894 [2024-11-20 09:30:53.281929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:27.894 [2024-11-20 09:30:53.281937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:27.894 [2024-11-20 09:30:53.281944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:27.894 [2024-11-20 09:30:53.281960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:27.894 [2024-11-20 09:30:53.281983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:27.894 [2024-11-20 09:30:53.281991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.894 [2024-11-20 09:30:53.281998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:27.894 [2024-11-20 09:30:53.282006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:27.894 [2024-11-20 09:30:53.282013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.894 [2024-11-20 09:30:53.282021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:27.894 [2024-11-20 09:30:53.282028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:27.894 [2024-11-20 09:30:53.282036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.895 [2024-11-20 09:30:53.282043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:27.895 [2024-11-20 09:30:53.282051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:27.895 [2024-11-20 09:30:53.282058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.895 [2024-11-20 09:30:53.282066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:27.895 [2024-11-20 09:30:53.282073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:27.895 [2024-11-20 09:30:53.282081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.895 [2024-11-20 09:30:53.282088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:27.895 [2024-11-20 09:30:53.282096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:27.895 [2024-11-20 09:30:53.282102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.895 [2024-11-20 09:30:53.282109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:27.895 [2024-11-20 09:30:53.282116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:27.895 [2024-11-20 09:30:53.282123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.895 [2024-11-20 09:30:53.282129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:27.895 [2024-11-20 09:30:53.282136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:27.895 [2024-11-20 09:30:53.282142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.895 [2024-11-20 09:30:53.282150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:27.895 [2024-11-20 09:30:53.282157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:27.895 [2024-11-20 09:30:53.282164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.895 [2024-11-20 09:30:53.282173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.895 [2024-11-20 09:30:53.282180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:27.895 [2024-11-20 09:30:53.282187] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:27.895 [2024-11-20 09:30:53.282194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:27.895 [2024-11-20 09:30:53.282201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:27.895 [2024-11-20 09:30:53.282208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:27.895 [2024-11-20 09:30:53.282214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:27.895 [2024-11-20 09:30:53.282222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:27.895 [2024-11-20 09:30:53.282231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:27.895 [2024-11-20 09:30:53.282246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:27.895 [2024-11-20 09:30:53.282253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:27.895 [2024-11-20 09:30:53.282260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:27.895 [2024-11-20 09:30:53.282268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:27.895 [2024-11-20 09:30:53.282274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:27.895 [2024-11-20 09:30:53.282281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:27.895 [2024-11-20 09:30:53.282288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:27.895 [2024-11-20 09:30:53.282295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:27.895 [2024-11-20 09:30:53.282312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:27.895 [2024-11-20 09:30:53.282348] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:27.895 [2024-11-20 09:30:53.282355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:27.895 [2024-11-20 09:30:53.282371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:27.895 [2024-11-20 09:30:53.282378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:27.895 [2024-11-20 09:30:53.282385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:27.895 [2024-11-20 09:30:53.282392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.895 [2024-11-20 09:30:53.282399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:27.895 [2024-11-20 09:30:53.282410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:18:27.895 [2024-11-20 09:30:53.282417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.895 [2024-11-20 09:30:53.308107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.895 [2024-11-20 09:30:53.308239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.895 [2024-11-20 09:30:53.308296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.641 ms 00:18:27.895 [2024-11-20 09:30:53.308330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.895 [2024-11-20 09:30:53.308473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.895 [2024-11-20 09:30:53.308504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:27.895 [2024-11-20 09:30:53.308524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:27.895 [2024-11-20 09:30:53.308592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.349590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.349758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:28.154 [2024-11-20 09:30:53.349817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.958 ms 00:18:28.154 [2024-11-20 09:30:53.349845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.349968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.349996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:28.154 [2024-11-20 09:30:53.350017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:28.154 [2024-11-20 09:30:53.350035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.350428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.350519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:28.154 [2024-11-20 09:30:53.350668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:28.154 [2024-11-20 09:30:53.350696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.350855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.350922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:28.154 [2024-11-20 09:30:53.350967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:28.154 [2024-11-20 09:30:53.350988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.364040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.364145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:28.154 [2024-11-20 09:30:53.364192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.018 ms 00:18:28.154 [2024-11-20 09:30:53.364214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.376945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:28.154 [2024-11-20 09:30:53.377081] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:28.154 [2024-11-20 09:30:53.377144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.377166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:28.154 [2024-11-20 09:30:53.377187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:18:28.154 [2024-11-20 09:30:53.377205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.401536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.401684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:28.154 [2024-11-20 09:30:53.401735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.253 ms 00:18:28.154 [2024-11-20 09:30:53.401757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.413723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.154 [2024-11-20 09:30:53.413857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:28.154 [2024-11-20 09:30:53.413914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.635 ms 00:18:28.154 [2024-11-20 09:30:53.413938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.154 [2024-11-20 09:30:53.425121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.425225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:28.155 [2024-11-20 09:30:53.425273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.106 ms 00:18:28.155 [2024-11-20 09:30:53.425294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.425926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.426011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:28.155 [2024-11-20 09:30:53.426083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:28.155 [2024-11-20 09:30:53.426106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.479976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 
09:30:53.480153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:28.155 [2024-11-20 09:30:53.480207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.833 ms 00:18:28.155 [2024-11-20 09:30:53.480231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.490864] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:28.155 [2024-11-20 09:30:53.504462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.504611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:28.155 [2024-11-20 09:30:53.504664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.843 ms 00:18:28.155 [2024-11-20 09:30:53.504687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.504800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.504828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:28.155 [2024-11-20 09:30:53.504849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:28.155 [2024-11-20 09:30:53.504868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.504927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.504950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:28.155 [2024-11-20 09:30:53.505025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:28.155 [2024-11-20 09:30:53.505048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.505090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.505115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:28.155 [2024-11-20 09:30:53.505134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:28.155 [2024-11-20 09:30:53.505152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.505194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:28.155 [2024-11-20 09:30:53.505331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.505353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:28.155 [2024-11-20 09:30:53.505372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:28.155 [2024-11-20 09:30:53.505391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.528954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.529101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:28.155 [2024-11-20 09:30:53.529153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:18:28.155 [2024-11-20 09:30:53.529176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.529279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.155 [2024-11-20 09:30:53.529325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:28.155 [2024-11-20 
09:30:53.529347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:28.155 [2024-11-20 09:30:53.529365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.155 [2024-11-20 09:30:53.530216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.155 [2024-11-20 09:30:53.533264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.466 ms, result 0 00:18:28.155 [2024-11-20 09:30:53.533987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:28.155 [2024-11-20 09:30:53.547075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.526  [2024-11-20T09:30:55.914Z] Copying: 44/256 [MB] (44 MBps) [2024-11-20T09:30:56.932Z] Copying: 88/256 [MB] (44 MBps) [2024-11-20T09:30:57.912Z] Copying: 134/256 [MB] (45 MBps) [2024-11-20T09:30:58.846Z] Copying: 177/256 [MB] (43 MBps) [2024-11-20T09:30:59.786Z] Copying: 220/256 [MB] (42 MBps) [2024-11-20T09:30:59.786Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-20 09:30:59.770106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.330 [2024-11-20 09:30:59.782816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.330 [2024-11-20 09:30:59.782860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.330 [2024-11-20 09:30:59.782874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.330 [2024-11-20 09:30:59.782888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.330 [2024-11-20 09:30:59.782910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:34.588 [2024-11-20 09:30:59.785543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.785575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.588 [2024-11-20 09:30:59.785586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:18:34.588 [2024-11-20 09:30:59.785595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.785856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.785879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.588 [2024-11-20 09:30:59.785888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:18:34.588 [2024-11-20 09:30:59.785896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.789640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.789670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.588 [2024-11-20 09:30:59.789680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:18:34.588 [2024-11-20 09:30:59.789688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.796686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.796717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:34.588 [2024-11-20 09:30:59.796726] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.978 ms 00:18:34.588 [2024-11-20 09:30:59.796733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.820602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.820645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.588 [2024-11-20 09:30:59.820656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.802 ms 00:18:34.588 [2024-11-20 09:30:59.820663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.834326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.834368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.588 [2024-11-20 09:30:59.834380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.637 ms 00:18:34.588 [2024-11-20 09:30:59.834390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.834540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.834552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.588 [2024-11-20 09:30:59.834561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:34.588 [2024-11-20 09:30:59.834568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.857842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.857884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:34.588 [2024-11-20 09:30:59.857895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:18:34.588 [2024-11-20 09:30:59.857903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.880721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.880758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:34.588 [2024-11-20 09:30:59.880769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.790 ms 00:18:34.588 [2024-11-20 09:30:59.880777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.903015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.903056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.588 [2024-11-20 09:30:59.903068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.212 ms 00:18:34.588 [2024-11-20 09:30:59.903076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.925153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.588 [2024-11-20 09:30:59.925193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.588 [2024-11-20 09:30:59.925204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.024 ms 00:18:34.588 [2024-11-20 09:30:59.925211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.588 [2024-11-20 09:30:59.925236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.589 [2024-11-20 09:30:59.925251] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925454] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 
09:30:59.925639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:34.589 [2024-11-20 09:30:59.925819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.589 [2024-11-20 09:30:59.925920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.925993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.926001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:34.590 [2024-11-20 09:30:59.926017] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.590 [2024-11-20 09:30:59.926024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eba3f3fd-920f-46ac-aa10-0eb07aaa862a 00:18:34.590 [2024-11-20 09:30:59.926032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.590 [2024-11-20 09:30:59.926040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.590 [2024-11-20 09:30:59.926046] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.590 [2024-11-20 09:30:59.926054] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.590 [2024-11-20 09:30:59.926061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.590 [2024-11-20 09:30:59.926069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.590 [2024-11-20 09:30:59.926076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.590 [2024-11-20 09:30:59.926082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.590 [2024-11-20 09:30:59.926088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.590 [2024-11-20 09:30:59.926095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.590 [2024-11-20 09:30:59.926104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.590 [2024-11-20 09:30:59.926112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:18:34.590 [2024-11-20 09:30:59.926120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.590 [2024-11-20 09:30:59.938552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.590 [2024-11-20 09:30:59.938585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.590 [2024-11-20 09:30:59.938596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:18:34.590 [2024-11-20 09:30:59.938603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.590 [2024-11-20 09:30:59.938963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.590 [2024-11-20 09:30:59.938984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.590 [2024-11-20 09:30:59.938992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:18:34.590 [2024-11-20 09:30:59.939000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.590 [2024-11-20 09:30:59.973878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.590 [2024-11-20 09:30:59.973925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.590 [2024-11-20 09:30:59.973936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.590 [2024-11-20 09:30:59.973944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.590 [2024-11-20 09:30:59.974047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.590 [2024-11-20 09:30:59.974056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.590 [2024-11-20 09:30:59.974064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.590 [2024-11-20 09:30:59.974072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
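[editor's note] The ftl_debug.c stats block above ends with "total writes: 960", "user writes: 0" and "WAF: inf". Assuming the conventional definition of write amplification (media writes divided by user writes), the "inf" follows directly from the zero denominator. A standalone sketch reproducing the printed figure:

    # Values from the ftl_dev_dump_stats records above; the formula is the
    # conventional WAF definition, assumed rather than taken from SPDK source.
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi
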
00:18:34.590 [2024-11-20 09:30:59.974116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.590 [2024-11-20 09:30:59.974126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.590 [2024-11-20 09:30:59.974134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.590 [2024-11-20 09:30:59.974141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.590 [2024-11-20 09:30:59.974158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.590 [2024-11-20 09:30:59.974169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.590 [2024-11-20 09:30:59.974176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.590 [2024-11-20 09:30:59.974184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.053913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.053971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.848 [2024-11-20 09:31:00.053984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.053992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.848 [2024-11-20 09:31:00.118541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.848 [2024-11-20 09:31:00.118642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.848 [2024-11-20 09:31:00.118697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.848 [2024-11-20 09:31:00.118805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.848 [2024-11-20 09:31:00.118857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 
09:31:00.118867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.848 [2024-11-20 09:31:00.118917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.118962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.848 [2024-11-20 09:31:00.118971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.848 [2024-11-20 09:31:00.118982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.848 [2024-11-20 09:31:00.118989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.848 [2024-11-20 09:31:00.119116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.305 ms, result 0 00:18:35.413 00:18:35.413 00:18:35.413 09:31:00 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:35.977 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:35.977 09:31:01 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74258 00:18:35.977 09:31:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74258 ']' 00:18:35.977 09:31:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74258 00:18:35.977 Process with pid 74258 is not found 00:18:35.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74258) - No such process 00:18:35.977 09:31:01 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74258 is not found' 00:18:35.977 00:18:35.977 real 0m53.130s 00:18:35.977 user 1m20.319s 00:18:35.977 sys 0m5.184s 00:18:35.977 09:31:01 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.977 09:31:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:35.977 ************************************ 00:18:35.977 END TEST ftl_trim 00:18:35.977 ************************************ 00:18:36.247 09:31:01 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:36.247 09:31:01 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:36.247 09:31:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.247 09:31:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.247 ************************************ 00:18:36.247 START TEST ftl_restore 00:18:36.247 ************************************ 00:18:36.247 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:18:36.247 * Looking for test storage... 00:18:36.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.247 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.247 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.247 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.247 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.247 09:31:01 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.247 09:31:01 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.247 09:31:01 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.247 09:31:01 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.248 09:31:01 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.248 --rc genhtml_branch_coverage=1 00:18:36.248 --rc genhtml_function_coverage=1 00:18:36.248 --rc genhtml_legend=1 00:18:36.248 --rc geninfo_all_blocks=1 00:18:36.248 --rc geninfo_unexecuted_blocks=1 00:18:36.248 00:18:36.248 ' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.248 --rc 
genhtml_branch_coverage=1 00:18:36.248 --rc genhtml_function_coverage=1 00:18:36.248 --rc genhtml_legend=1 00:18:36.248 --rc geninfo_all_blocks=1 00:18:36.248 --rc geninfo_unexecuted_blocks=1 00:18:36.248 00:18:36.248 ' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.248 --rc genhtml_branch_coverage=1 00:18:36.248 --rc genhtml_function_coverage=1 00:18:36.248 --rc genhtml_legend=1 00:18:36.248 --rc geninfo_all_blocks=1 00:18:36.248 --rc geninfo_unexecuted_blocks=1 00:18:36.248 00:18:36.248 ' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.248 --rc genhtml_branch_coverage=1 00:18:36.248 --rc genhtml_function_coverage=1 00:18:36.248 --rc genhtml_legend=1 00:18:36.248 --rc geninfo_all_blocks=1 00:18:36.248 --rc geninfo_unexecuted_blocks=1 00:18:36.248 00:18:36.248 ' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Xp9OjotNxB 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74468 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74468 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74468 ']' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.248 09:31:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:36.248 09:31:01 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.248 [2024-11-20 09:31:01.690458] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
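[editor's note] The restore.sh xtrace above shows how 'restore.sh -c 0000:00:10.0 0000:00:11.0' is parsed: getopts ':u:c:f' assigns the -c operand to nv_cache, a 'shift 2' drops the consumed option words, and the first remaining positional argument becomes the base device. A sketch of that parsing loop, reconstructed from the xtrace output rather than copied from restore.sh:

    # Argument handling as seen in the trace; -u/-f branches are elided here.
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;        # -c 0000:00:10.0 -> NV cache PCIe bdf
            *) ;;
        esac
    done
    shift $(( OPTIND - 1 ))               # equivalent of the 'shift 2' in the trace
    device=$1                             # 0000:00:11.0 -> base device PCIe bdf
    echo "nv_cache=$nv_cache device=$device"
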
00:18:36.248 [2024-11-20 09:31:01.690603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74468 ] 00:18:36.518 [2024-11-20 09:31:01.850045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.518 [2024-11-20 09:31:01.949669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.083 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.083 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:37.083 09:31:02 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:37.649 09:31:02 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:37.649 09:31:02 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:37.649 09:31:02 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:37.649 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:37.649 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:37.649 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:37.649 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:37.649 09:31:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:37.649 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:37.649 { 00:18:37.649 "name": "nvme0n1", 00:18:37.649 "aliases": [ 00:18:37.650 "767f3d1b-f8b7-4914-8fb0-c98bb0e8346a" 00:18:37.650 ], 00:18:37.650 "product_name": "NVMe disk", 00:18:37.650 "block_size": 4096, 00:18:37.650 "num_blocks": 1310720, 00:18:37.650 "uuid": "767f3d1b-f8b7-4914-8fb0-c98bb0e8346a", 00:18:37.650 "numa_id": -1, 00:18:37.650 "assigned_rate_limits": { 00:18:37.650 "rw_ios_per_sec": 0, 00:18:37.650 "rw_mbytes_per_sec": 0, 00:18:37.650 "r_mbytes_per_sec": 0, 00:18:37.650 "w_mbytes_per_sec": 0 00:18:37.650 }, 00:18:37.650 "claimed": true, 00:18:37.650 "claim_type": "read_many_write_one", 00:18:37.650 "zoned": false, 00:18:37.650 "supported_io_types": { 00:18:37.650 "read": true, 00:18:37.650 "write": true, 00:18:37.650 "unmap": true, 00:18:37.650 "flush": true, 00:18:37.650 "reset": true, 00:18:37.650 "nvme_admin": true, 00:18:37.650 "nvme_io": true, 00:18:37.650 "nvme_io_md": false, 00:18:37.650 "write_zeroes": true, 00:18:37.650 "zcopy": false, 00:18:37.650 "get_zone_info": false, 00:18:37.650 "zone_management": false, 00:18:37.650 "zone_append": false, 00:18:37.650 "compare": true, 00:18:37.650 "compare_and_write": false, 00:18:37.650 "abort": true, 00:18:37.650 "seek_hole": false, 00:18:37.650 "seek_data": false, 00:18:37.650 "copy": true, 00:18:37.650 "nvme_iov_md": false 00:18:37.650 }, 00:18:37.650 "driver_specific": { 00:18:37.650 "nvme": [ 
00:18:37.650 { 00:18:37.650 "pci_address": "0000:00:11.0", 00:18:37.650 "trid": { 00:18:37.650 "trtype": "PCIe", 00:18:37.650 "traddr": "0000:00:11.0" 00:18:37.650 }, 00:18:37.650 "ctrlr_data": { 00:18:37.650 "cntlid": 0, 00:18:37.650 "vendor_id": "0x1b36", 00:18:37.650 "model_number": "QEMU NVMe Ctrl", 00:18:37.650 "serial_number": "12341", 00:18:37.650 "firmware_revision": "8.0.0", 00:18:37.650 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:37.650 "oacs": { 00:18:37.650 "security": 0, 00:18:37.650 "format": 1, 00:18:37.650 "firmware": 0, 00:18:37.650 "ns_manage": 1 00:18:37.650 }, 00:18:37.650 "multi_ctrlr": false, 00:18:37.650 "ana_reporting": false 00:18:37.650 }, 00:18:37.650 "vs": { 00:18:37.650 "nvme_version": "1.4" 00:18:37.650 }, 00:18:37.650 "ns_data": { 00:18:37.650 "id": 1, 00:18:37.650 "can_share": false 00:18:37.650 } 00:18:37.650 } 00:18:37.650 ], 00:18:37.650 "mp_policy": "active_passive" 00:18:37.650 } 00:18:37.650 } 00:18:37.650 ]' 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:37.650 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:37.650 09:31:03 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:37.650 09:31:03 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:37.650 09:31:03 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:37.650 09:31:03 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:37.650 09:31:03 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.216 09:31:03 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ce1061f0-b198-4fbe-ba08-bb3364fe8312 00:18:38.216 09:31:03 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.216 09:31:03 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce1061f0-b198-4fbe-ba08-bb3364fe8312 00:18:38.216 09:31:03 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.473 09:31:03 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c8603e5a-d534-462e-8d18-f46cca709d06 00:18:38.473 09:31:03 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c8603e5a-d534-462e-8d18-f46cca709d06 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:38.731 09:31:03 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.731 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.731 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:38.731 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:38.731 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:38.731 09:31:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:38.989 { 00:18:38.989 "name": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:38.989 "aliases": [ 00:18:38.989 "lvs/nvme0n1p0" 00:18:38.989 ], 00:18:38.989 "product_name": "Logical Volume", 00:18:38.989 "block_size": 4096, 00:18:38.989 "num_blocks": 26476544, 00:18:38.989 "uuid": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:38.989 "assigned_rate_limits": { 00:18:38.989 "rw_ios_per_sec": 0, 00:18:38.989 "rw_mbytes_per_sec": 0, 00:18:38.989 "r_mbytes_per_sec": 0, 00:18:38.989 "w_mbytes_per_sec": 0 00:18:38.989 }, 00:18:38.989 "claimed": false, 00:18:38.989 "zoned": false, 00:18:38.989 "supported_io_types": { 00:18:38.989 "read": true, 00:18:38.989 "write": true, 00:18:38.989 "unmap": true, 00:18:38.989 "flush": false, 00:18:38.989 "reset": true, 00:18:38.989 "nvme_admin": false, 00:18:38.989 "nvme_io": false, 00:18:38.989 "nvme_io_md": false, 00:18:38.989 "write_zeroes": true, 00:18:38.989 "zcopy": false, 00:18:38.989 "get_zone_info": false, 00:18:38.989 "zone_management": false, 00:18:38.989 "zone_append": false, 00:18:38.989 "compare": false, 00:18:38.989 "compare_and_write": false, 00:18:38.989 "abort": false, 00:18:38.989 "seek_hole": true, 00:18:38.989 "seek_data": true, 00:18:38.989 "copy": false, 00:18:38.989 "nvme_iov_md": false 00:18:38.989 }, 00:18:38.989 "driver_specific": { 00:18:38.989 "lvol": { 00:18:38.989 "lvol_store_uuid": "c8603e5a-d534-462e-8d18-f46cca709d06", 00:18:38.989 "base_bdev": "nvme0n1", 00:18:38.989 "thin_provision": true, 00:18:38.989 "num_allocated_clusters": 0, 00:18:38.989 "snapshot": false, 00:18:38.989 "clone": false, 00:18:38.989 "esnap_clone": false 00:18:38.989 } 00:18:38.989 } 00:18:38.989 } 00:18:38.989 ]' 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:38.989 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:38.989 09:31:04 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:38.989 09:31:04 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:38.989 09:31:04 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.247 09:31:04 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.247 09:31:04 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.247 09:31:04 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.247 09:31:04 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.247 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.247 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:39.247 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:39.247 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.505 { 00:18:39.505 "name": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:39.505 "aliases": [ 00:18:39.505 "lvs/nvme0n1p0" 00:18:39.505 ], 00:18:39.505 "product_name": "Logical Volume", 00:18:39.505 "block_size": 4096, 00:18:39.505 "num_blocks": 26476544, 00:18:39.505 "uuid": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:39.505 "assigned_rate_limits": { 00:18:39.505 "rw_ios_per_sec": 0, 00:18:39.505 "rw_mbytes_per_sec": 0, 00:18:39.505 "r_mbytes_per_sec": 0, 00:18:39.505 "w_mbytes_per_sec": 0 00:18:39.505 }, 00:18:39.505 "claimed": false, 00:18:39.505 "zoned": false, 00:18:39.505 "supported_io_types": { 00:18:39.505 "read": true, 00:18:39.505 "write": true, 00:18:39.505 "unmap": true, 00:18:39.505 "flush": false, 00:18:39.505 "reset": true, 00:18:39.505 "nvme_admin": false, 00:18:39.505 "nvme_io": false, 00:18:39.505 "nvme_io_md": false, 00:18:39.505 "write_zeroes": true, 00:18:39.505 "zcopy": false, 00:18:39.505 "get_zone_info": false, 00:18:39.505 "zone_management": false, 00:18:39.505 "zone_append": false, 00:18:39.505 "compare": false, 00:18:39.505 "compare_and_write": false, 00:18:39.505 "abort": false, 00:18:39.505 "seek_hole": true, 00:18:39.505 "seek_data": true, 00:18:39.505 "copy": false, 00:18:39.505 "nvme_iov_md": false 00:18:39.505 }, 00:18:39.505 "driver_specific": { 00:18:39.505 "lvol": { 00:18:39.505 "lvol_store_uuid": "c8603e5a-d534-462e-8d18-f46cca709d06", 00:18:39.505 "base_bdev": "nvme0n1", 00:18:39.505 "thin_provision": true, 00:18:39.505 "num_allocated_clusters": 0, 00:18:39.505 "snapshot": false, 00:18:39.505 "clone": false, 00:18:39.505 "esnap_clone": false 00:18:39.505 } 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ]' 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:39.505 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:39.505 09:31:04 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:39.505 09:31:04 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:39.763 09:31:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:39.763 09:31:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.763 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.763 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.763 09:31:04 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:18:39.763 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:39.763 09:31:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05ca9046-d490-4a8d-80f4-d9f5cab8d207 00:18:39.763 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.763 { 00:18:39.763 "name": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:39.763 "aliases": [ 00:18:39.763 "lvs/nvme0n1p0" 00:18:39.763 ], 00:18:39.763 "product_name": "Logical Volume", 00:18:39.763 "block_size": 4096, 00:18:39.763 "num_blocks": 26476544, 00:18:39.763 "uuid": "05ca9046-d490-4a8d-80f4-d9f5cab8d207", 00:18:39.763 "assigned_rate_limits": { 00:18:39.763 "rw_ios_per_sec": 0, 00:18:39.763 "rw_mbytes_per_sec": 0, 00:18:39.763 "r_mbytes_per_sec": 0, 00:18:39.763 "w_mbytes_per_sec": 0 00:18:39.763 }, 00:18:39.763 "claimed": false, 00:18:39.763 "zoned": false, 00:18:39.763 "supported_io_types": { 00:18:39.763 "read": true, 00:18:39.763 "write": true, 00:18:39.763 "unmap": true, 00:18:39.763 "flush": false, 00:18:39.763 "reset": true, 00:18:39.763 "nvme_admin": false, 00:18:39.763 "nvme_io": false, 00:18:39.763 "nvme_io_md": false, 00:18:39.763 "write_zeroes": true, 00:18:39.763 "zcopy": false, 00:18:39.763 "get_zone_info": false, 00:18:39.763 "zone_management": false, 00:18:39.763 "zone_append": false, 00:18:39.763 "compare": false, 00:18:39.763 "compare_and_write": false, 00:18:39.763 "abort": false, 00:18:39.763 "seek_hole": true, 00:18:39.763 "seek_data": true, 00:18:39.763 "copy": false, 00:18:39.763 "nvme_iov_md": false 00:18:39.763 }, 00:18:39.763 "driver_specific": { 00:18:39.763 "lvol": { 00:18:39.763 "lvol_store_uuid": "c8603e5a-d534-462e-8d18-f46cca709d06", 00:18:39.763 "base_bdev": "nvme0n1", 00:18:39.763 "thin_provision": true, 00:18:39.763 "num_allocated_clusters": 0, 00:18:39.764 "snapshot": false, 00:18:39.764 "clone": false, 00:18:39.764 "esnap_clone": false 00:18:39.764 } 00:18:39.764 } 00:18:39.764 } 00:18:39.764 ]' 00:18:39.764 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.042 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.042 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.042 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.042 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.042 09:31:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 05ca9046-d490-4a8d-80f4-d9f5cab8d207 --l2p_dram_limit 10' 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:40.042 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:40.042 09:31:05 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05ca9046-d490-4a8d-80f4-d9f5cab8d207 --l2p_dram_limit 10 -c nvc0n1p0 00:18:40.042 
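The setup above probes the same lvol bdev three times while assembling the devices: the stale lvstore ce1061f0 is deleted, a fresh store c8603e5a is created on nvme0n1, and a 103424 MiB thin-provisioned volume 05ca9046 is carved from the 5120 MiB namespace (the -t flag is what makes that oversubscription legal), while a 5171 MiB slice of nvc0n1 (nvc0n1p0) is split off to serve as the NV cache. Each get_bdev_size call is the same jq arithmetic over bdev_get_bdevs output, condensed here as a sketch (the function-local names are illustrative):

  # block_size * num_blocks, reported in MiB; mirrors the repeated
  # jq '.[] .block_size' / jq '.[] .num_blocks' steps in the trace above.
  get_bdev_size() {
      local info bs nb
      info=$("$rpc_py" bdev_get_bdevs -b "$1")
      bs=$(jq '.[] .block_size' <<< "$info")   # 4096
      nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 for the lvol
      echo $(( bs * nb / 1024 / 1024 ))        # 4096 * 26476544 / 2^20 = 103424
  }

The "[: : integer expression expected" message is benign: restore.sh line 54 tests an option flag that was never set ('[' '' -eq 1 ']'), so the integer comparison fails with that complaint and the optional branch is skipped. The bdev_ftl_create RPC then proceeds with the lvol as data device, nvc0n1p0 as cache, and --l2p_dram_limit 10, and everything below is the FTL startup trace it triggers.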
[2024-11-20 09:31:05.447557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.447599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.042 [2024-11-20 09:31:05.447613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.042 [2024-11-20 09:31:05.447621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.447669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.447676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.042 [2024-11-20 09:31:05.447684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:40.042 [2024-11-20 09:31:05.447690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.447710] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.042 [2024-11-20 09:31:05.448334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.042 [2024-11-20 09:31:05.448360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.448367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.042 [2024-11-20 09:31:05.448375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:18:40.042 [2024-11-20 09:31:05.448381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.448437] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8b0e7245-2226-4a2b-958f-fecff4d7a024 00:18:40.042 [2024-11-20 09:31:05.449397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.449426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:40.042 [2024-11-20 09:31:05.449434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:40.042 [2024-11-20 09:31:05.449441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.454251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.454280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.042 [2024-11-20 09:31:05.454290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:18:40.042 [2024-11-20 09:31:05.454310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.454378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.454387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.042 [2024-11-20 09:31:05.454394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:40.042 [2024-11-20 09:31:05.454403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.454440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.454449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.042 [2024-11-20 09:31:05.454455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.042 [2024-11-20 09:31:05.454464] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.454481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.042 [2024-11-20 09:31:05.457379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.457406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.042 [2024-11-20 09:31:05.457416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.902 ms 00:18:40.042 [2024-11-20 09:31:05.457423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.457450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.457457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.042 [2024-11-20 09:31:05.457465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.042 [2024-11-20 09:31:05.457471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.457485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.042 [2024-11-20 09:31:05.457593] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.042 [2024-11-20 09:31:05.457606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.042 [2024-11-20 09:31:05.457615] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.042 [2024-11-20 09:31:05.457624] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.042 [2024-11-20 09:31:05.457632] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.042 [2024-11-20 09:31:05.457639] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.042 [2024-11-20 09:31:05.457645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.042 [2024-11-20 09:31:05.457654] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.042 [2024-11-20 09:31:05.457660] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.042 [2024-11-20 09:31:05.457667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.457673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.042 [2024-11-20 09:31:05.457681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:18:40.042 [2024-11-20 09:31:05.457692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.457759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.042 [2024-11-20 09:31:05.457772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.042 [2024-11-20 09:31:05.457779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:40.042 [2024-11-20 09:31:05.457785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.042 [2024-11-20 09:31:05.457865] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.042 [2024-11-20 09:31:05.457872] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:40.042 [2024-11-20 09:31:05.457880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.042 [2024-11-20 09:31:05.457887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.042 [2024-11-20 09:31:05.457894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.042 [2024-11-20 09:31:05.457899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.042 [2024-11-20 09:31:05.457906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.042 [2024-11-20 09:31:05.457912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.042 [2024-11-20 09:31:05.457918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.042 [2024-11-20 09:31:05.457924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.042 [2024-11-20 09:31:05.457930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.042 [2024-11-20 09:31:05.457935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.042 [2024-11-20 09:31:05.457942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.042 [2024-11-20 09:31:05.457947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.043 [2024-11-20 09:31:05.457955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.043 [2024-11-20 09:31:05.457959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.043 [2024-11-20 09:31:05.457968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.043 [2024-11-20 09:31:05.457973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.043 [2024-11-20 09:31:05.457981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.043 [2024-11-20 09:31:05.457986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.043 [2024-11-20 09:31:05.457993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.043 [2024-11-20 09:31:05.457998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.043 [2024-11-20 09:31:05.458010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.043 [2024-11-20 09:31:05.458027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.043 [2024-11-20 09:31:05.458044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.043 [2024-11-20 09:31:05.458062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458068] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.043 [2024-11-20 09:31:05.458074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.043 [2024-11-20 09:31:05.458079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.043 [2024-11-20 09:31:05.458085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.043 [2024-11-20 09:31:05.458090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.043 [2024-11-20 09:31:05.458097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:40.043 [2024-11-20 09:31:05.458103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.043 [2024-11-20 09:31:05.458115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.043 [2024-11-20 09:31:05.458121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458127] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.043 [2024-11-20 09:31:05.458134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.043 [2024-11-20 09:31:05.458139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.043 [2024-11-20 09:31:05.458153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.043 [2024-11-20 09:31:05.458161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.043 [2024-11-20 09:31:05.458167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.043 [2024-11-20 09:31:05.458173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.043 [2024-11-20 09:31:05.458179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.043 [2024-11-20 09:31:05.458185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.043 [2024-11-20 09:31:05.458193] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.043 [2024-11-20 09:31:05.458201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.043 [2024-11-20 09:31:05.458216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.043 [2024-11-20 09:31:05.458222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:40.043 [2024-11-20 09:31:05.458228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.043 [2024-11-20 09:31:05.458234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.043 [2024-11-20 09:31:05.458241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:40.043 [2024-11-20 09:31:05.458246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.043 [2024-11-20 09:31:05.458253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.043 [2024-11-20 09:31:05.458258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.043 [2024-11-20 09:31:05.458266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.043 [2024-11-20 09:31:05.458296] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.043 [2024-11-20 09:31:05.458316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.043 [2024-11-20 09:31:05.458330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.043 [2024-11-20 09:31:05.458336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.043 [2024-11-20 09:31:05.458343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.043 [2024-11-20 09:31:05.458349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.043 [2024-11-20 09:31:05.458356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.043 [2024-11-20 09:31:05.458362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:18:40.043 [2024-11-20 09:31:05.458369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.043 [2024-11-20 09:31:05.458411] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
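The layout dump is internally consistent and worth a quick cross-check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB "Region l2p", and those entries map 20971520 4-KiB blocks, i.e. 81920 MiB of user-addressable space out of the 103424 MiB (thin) base device, the difference presumably going to band and metadata overhead. Plain shell arithmetic, nothing SPDK-specific:

  entries=20971520; entry_sz=4; blk_sz=4096
  echo $(( entries * entry_sz / 1024 / 1024 ))   # 80    -> "Region l2p ... 80.00 MiB"
  echo $(( entries * blk_sz / 1024 / 1024 ))     # 81920 -> MiB addressable via the L2P

The "NV cache chunk count 5" above also matches the scrub that follows: the 5171 MiB cache partition is wiped chunk by chunk before first use.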
00:18:40.043 [2024-11-20 09:31:05.458422] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:42.573 [2024-11-20 09:31:07.581780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.581842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:42.573 [2024-11-20 09:31:07.581857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2123.360 ms 00:18:42.573 [2024-11-20 09:31:07.581869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.615070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.615126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.573 [2024-11-20 09:31:07.615139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.688 ms 00:18:42.573 [2024-11-20 09:31:07.615149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.615316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.615331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:42.573 [2024-11-20 09:31:07.615340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:42.573 [2024-11-20 09:31:07.615351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.645894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.645934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.573 [2024-11-20 09:31:07.645945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.502 ms 00:18:42.573 [2024-11-20 09:31:07.645955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.645991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.646005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.573 [2024-11-20 09:31:07.646013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:42.573 [2024-11-20 09:31:07.646021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.646392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.646417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.573 [2024-11-20 09:31:07.646427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:42.573 [2024-11-20 09:31:07.646436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.646546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.646567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.573 [2024-11-20 09:31:07.646577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:42.573 [2024-11-20 09:31:07.646588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.660497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.660533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.573 [2024-11-20 
09:31:07.660543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.891 ms 00:18:42.573 [2024-11-20 09:31:07.660552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.671793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:42.573 [2024-11-20 09:31:07.674449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.674475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:42.573 [2024-11-20 09:31:07.674486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.818 ms 00:18:42.573 [2024-11-20 09:31:07.674494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.738851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.738905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:42.573 [2024-11-20 09:31:07.738921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.325 ms 00:18:42.573 [2024-11-20 09:31:07.738929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.739109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.739120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:42.573 [2024-11-20 09:31:07.739133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:18:42.573 [2024-11-20 09:31:07.739140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.762551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.762601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:42.573 [2024-11-20 09:31:07.762615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.364 ms 00:18:42.573 [2024-11-20 09:31:07.762623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.785668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.785714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:42.573 [2024-11-20 09:31:07.785728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:18:42.573 [2024-11-20 09:31:07.785736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.786322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.786345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:42.573 [2024-11-20 09:31:07.786357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:18:42.573 [2024-11-20 09:31:07.786366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.853201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.853249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:42.573 [2024-11-20 09:31:07.853267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.798 ms 00:18:42.573 [2024-11-20 09:31:07.853277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 
09:31:07.877458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.877510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:42.573 [2024-11-20 09:31:07.877524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:18:42.573 [2024-11-20 09:31:07.877533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.901164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.901207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:42.573 [2024-11-20 09:31:07.901221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.597 ms 00:18:42.573 [2024-11-20 09:31:07.901228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.924258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.924313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:42.573 [2024-11-20 09:31:07.924328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.002 ms 00:18:42.573 [2024-11-20 09:31:07.924336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.924364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.924373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:42.573 [2024-11-20 09:31:07.924385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:42.573 [2024-11-20 09:31:07.924393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.924472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.573 [2024-11-20 09:31:07.924484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:42.573 [2024-11-20 09:31:07.924493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:42.573 [2024-11-20 09:31:07.924501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.573 [2024-11-20 09:31:07.925532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2477.541 ms, result 0 00:18:42.573 { 00:18:42.573 "name": "ftl0", 00:18:42.573 "uuid": "8b0e7245-2226-4a2b-958f-fecff4d7a024" 00:18:42.573 } 00:18:42.573 09:31:07 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:42.573 09:31:07 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:42.831 09:31:08 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:42.831 09:31:08 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:43.090 [2024-11-20 09:31:08.324970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.325029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:43.090 [2024-11-20 09:31:08.325042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.090 [2024-11-20 09:31:08.325057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.325080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
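Startup finished with result 0 in about 2.48 s, restore.sh captured the bdev subsystem configuration, and bdev_ftl_unload now tears ftl0 down; the trace that follows is the orderly shutdown persisting the L2P, NV cache metadata, valid map, P2L checkpoints, band and trim metadata, and finally the superblock and clean state, which is what a later load will restore from. The JSON wrapper assembled by the echo / save_subsystem_config / echo trio above reconstructs to roughly this sketch (the redirect target is an assumption; the tgt.json path exported in common.sh is the natural destination for presumably restarting spdk_tgt from that config):

  {
      echo '{"subsystems": ['
      "$rpc_py" save_subsystem_config -n bdev
      echo ']}'
  } > "$spdk_tgt_cnfg"   # assumed destination: .../test/ftl/config/tgt.json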
00:18:43.090 [2024-11-20 09:31:08.327749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.327784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:43.090 [2024-11-20 09:31:08.327796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:18:43.090 [2024-11-20 09:31:08.327803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.328081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.328155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:43.090 [2024-11-20 09:31:08.328167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:43.090 [2024-11-20 09:31:08.328174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.331439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.331465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:43.090 [2024-11-20 09:31:08.331479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:18:43.090 [2024-11-20 09:31:08.331487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.337685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.337715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:43.090 [2024-11-20 09:31:08.337728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:18:43.090 [2024-11-20 09:31:08.337736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.361676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.361715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:43.090 [2024-11-20 09:31:08.361729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.883 ms 00:18:43.090 [2024-11-20 09:31:08.361736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.376771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.376813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.090 [2024-11-20 09:31:08.376827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.990 ms 00:18:43.090 [2024-11-20 09:31:08.376835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.376991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.377008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.090 [2024-11-20 09:31:08.377019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:43.090 [2024-11-20 09:31:08.377028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.401261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.401305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:43.090 [2024-11-20 09:31:08.401319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.209 ms 00:18:43.090 [2024-11-20 09:31:08.401326] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.424271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.424319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:43.090 [2024-11-20 09:31:08.424333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:18:43.090 [2024-11-20 09:31:08.424340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.447505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.447546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.090 [2024-11-20 09:31:08.447558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:18:43.090 [2024-11-20 09:31:08.447566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.470128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.090 [2024-11-20 09:31:08.470169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.090 [2024-11-20 09:31:08.470183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.483 ms 00:18:43.090 [2024-11-20 09:31:08.470191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.090 [2024-11-20 09:31:08.470229] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.090 [2024-11-20 09:31:08.470243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.090 [2024-11-20 09:31:08.470257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.090 [2024-11-20 09:31:08.470265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.090 [2024-11-20 09:31:08.470274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 09:31:08.470370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.091 [2024-11-20 
09:31:08.470377 .. 09:31:08.471129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 15-100: 0 / 261120 wr_cnt: 0 state: free (86 identical per-band entries elided)
00:18:43.092 [2024-11-20 09:31:08.471145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:43.092 [2024-11-20 09:31:08.471154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b0e7245-2226-4a2b-958f-fecff4d7a024
00:18:43.092 [2024-11-20 09:31:08.471161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:43.092 [2024-11-20 09:31:08.471172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:43.092 [2024-11-20 09:31:08.471181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:43.092 [2024-11-20 09:31:08.471190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:43.092 [2024-11-20 09:31:08.471197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:43.092 [2024-11-20 09:31:08.471205 .. 471227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0  high: 0  low: 0  start: 0
00:18:43.092 FTL shutdown trace (mngt/ftl_mngt.c trace_step; name, duration, all status 0):
  Dump statistics              1.008 ms
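(Note on the statistics above; the interpretation is an editor-added inference, not part of the log: WAF is the write amplification factor, media writes over host writes, i.e. WAF = total writes / user writes = 960 / 0, which is undefined and is therefore printed as "inf". At this first shutdown only FTL-internal metadata writes have occurred.)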
00:18:43.092 FTL shutdown trace, continued (mngt/ftl_mngt.c trace_step; all status 0):
  Deinitialize L2P              12.504 ms
  Deinitialize P2L checkpointing 0.319 ms
  Rollback steps, each 0.000 ms: Initialize reloc, Initialize bands metadata, Initialize trim map,
    Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel,
    Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:18:43.349 [2024-11-20 09:31:08.671383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.387 ms, result 0
00:18:43.349 true
00:18:43.349 09:31:08 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74468
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74468 ']'
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74468
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74468
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:43.349 killing process with pid 74468
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74468'
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74468
00:18:43.349 09:31:08 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74468
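(Note: the xtrace above comes from a killprocess-style helper in common/autotest_common.sh. A minimal sketch of the logic it appears to implement, reconstructed only from this trace; the function body and the untaken sudo branch are assumptions, not the canonical SPDK source:)

killprocess() {
    local pid=$1
    # @954: require a pid argument
    [ -z "$pid" ] && return 1
    # @958: nothing to do if the process is already gone
    kill -0 "$pid" || return 0
    # @959/@960: resolve the process name on Linux (here: reactor_0)
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # @964: special-case processes launched via sudo (branch not taken in this run)
    if [ "$process_name" = sudo ]; then
        : # behavior not exercised in this log
    else
        echo "killing process with pid $pid"   # @972
        kill "$pid"                            # @973
    fi
    wait "$pid"                                # @978: reap and propagate the exit status
}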
00:18:49.903 09:31:14 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:18:53.212 262144+0 records in
00:18:53.212 262144+0 records out
00:18:53.212 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.95358 s, 272 MB/s
00:18:53.212 09:31:18 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:55.737 09:31:20 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
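(Note: the three restore.sh steps above fill a 1 GiB file with random data, checksum it, then write it through the FTL bdev. As a sanity check on the dd line, 1073741824 B / 3.95358 s is about 271.6 MB/s, which dd rounds to the reported 272 MB/s. Condensed restatement, with paths copied from the trace and comments inferred from this log rather than from restore.sh itself:)

# restore.sh@69-@73 as traced above (comments are editor inferences)
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
dd if=/dev/urandom of="$testfile" bs=4K count=256K   # 262144 x 4 KiB = 1 GiB of random data
md5sum "$testfile"                                   # checksum to compare against after the restore
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if="$testfile" --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # write the file to the ftl0 bdev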
[2024-11-20 09:31:20.846526] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
[2024-11-20 09:31:20.846800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74686 ]
[2024-11-20 09:31:21.005980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 09:31:21.106880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 09:31:21.361594] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (notice emitted twice)
[2024-11-20 09:31:21.515427] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 09:31:21.515970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 09:31:21.517015] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 09:31:21.531462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
FTL startup trace (mngt/ftl_mngt.c trace_step; name, duration, all status 0):
  Check configuration          0.003 ms
  Open base bdev               0.025 ms
  Open cache bdev              0.562 ms
  Load super block             9.532 ms
  Validate super block         0.017 ms
  Initialize memory pools      4.561 ms
  Initialize bands             0.040 ms
  Register IO device           0.006 ms
  Initialize core IO channel   2.571 ms
  Decorate bands               0.010 ms
[2024-11-20 09:31:21.534126] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[upgrade/ftl_sb_v5.c] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[upgrade/ftl_sb_v5.c] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[ftl_layout.c] Base device capacity: 103424.00 MiB; NV cache device capacity: 5171.00 MiB
[ftl_layout.c] L2P entries: 20971520; L2P address size: 4; P2L checkpoint pages: 2048; NV cache chunk count: 5
FTL startup trace, continued (all status 0):
  Initialize layout            0.220 ms
  Verify layout                0.061 ms
[2024-11-20 09:31:21.534563] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset / blocks):
  sb                 0.00 MiB     0.12 MiB
  l2p                0.12 MiB    80.00 MiB
  band_md           80.12 MiB     0.50 MiB
  band_md_mirror    80.62 MiB     0.50 MiB
  nvc_md           113.88 MiB     0.12 MiB
  nvc_md_mirror    114.00 MiB     0.12 MiB
  p2l0              81.12 MiB     8.00 MiB
  p2l1              89.12 MiB     8.00 MiB
  p2l2              97.12 MiB     8.00 MiB
  p2l3             105.12 MiB     8.00 MiB
  trim_md          113.12 MiB     0.25 MiB
  trim_md_mirror   113.38 MiB     0.25 MiB
  trim_log         113.62 MiB     0.12 MiB
  trim_log_mirror  113.75 MiB     0.12 MiB
[2024-11-20 09:31:21.534791] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region / offset / blocks):
  sb_mirror          0.00 MiB     0.12 MiB
  vmap          102400.25 MiB     3.38 MiB
  data_btm           0.25 MiB 102400.00 MiB
[2024-11-20 09:31:21.534846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-20 09:31:21.534933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
FTL startup trace, continued (all status 0):
  Layout upgrade               0.465 ms
  Initialize metadata         20.990 ms
  Initialize band addresses    0.046 ms
  Initialize NV cache         35.487 ms
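(Note: a quick consistency check on the layout dump above; the arithmetic is an editor-added illustration, not test output. The l2p region holds one 4-byte address per L2P entry:)

# hypothetical one-liner, not part of the test:
echo $(( 20971520 * 4 / 1048576 ))   # -> 80: L2P entries x 4 B = 80.00 MiB, matching the l2p region size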
FTL startup trace, continued (all status 0):
  Initialize valid map         0.003 ms
  Initialize trim map          0.290 ms
  Initialize bands metadata    0.084 ms
  Initialize reloc            10.748 ms
[2024-11-20 09:31:21.613133] ftl_nv_cache.c:1772/1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4; state loaded successfully
  Restore NV cache metadata    9.687 ms
  Restore valid map metadata  18.609 ms
  Restore band info metadata   8.787 ms
  Restore trim metadata        8.740 ms
  Initialize P2L checkpointing 0.410 ms
  Restore P2L checkpoints     44.100 ms
[2024-11-20 09:31:21.702551] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
  Initialize L2P              10.618 ms
  Restore L2P                  0.011 ms
  Finalize band initialization 0.027 ms
  Start core poller            0.004 ms
[2024-11-20 09:31:21.705347] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
  Self test on startup         0.009 ms
  Set FTL dirty state         19.989 ms
  Finalize initialization      0.031 ms
[2024-11-20 09:31:21.726346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 210.704 ms, result 0
[2024-11-20T09:31:23.849Z .. 09:31:44.175Z] Copying: 48/1024 [MB] (48 MBps) ... Copying: 1024/1024 [MB] (average 46 MBps) (21 intermediate progress samples at 38-53 MBps elided)
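(Note: an editor-added check, not test output: 1024 MB copied between roughly 09:31:21.7, when startup finished, and 09:31:44.2 is about 22 s, and 1024 / 22 is approximately 46 MBps, consistent with the reported average.)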
FTL shutdown trace (mngt/ftl_mngt.c trace_step; name, duration, all status 0):
  Deinit core IO channel       0.003 ms
[2024-11-20 09:31:43.956748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
  Unregister IO device         2.647 ms
  Stop core poller             1.523 ms
  Persist L2P                 12.681 ms
  Finish L2P trims             7.104 ms
  Persist NV cache metadata   23.761 ms
  Persist valid map metadata  14.207 ms
  Persist P2L metadata         0.095 ms
  Persist band info metadata  23.114 ms
  Persist trim metadata       22.717 ms
  Persist superblock          22.428 ms
  Set FTL clean state         22.141 ms
[2024-11-20 09:31:44.111715] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[09:31:44.111743 .. 09:31:44.114850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries elided)
00:19:18.720 [2024-11-20 09:31:44.114865] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:18.720 [2024-11-20 09:31:44.114878] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b0e7245-2226-4a2b-958f-fecff4d7a024
00:19:18.720 [2024-11-20 09:31:44.114887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:18.720 [2024-11-20 09:31:44.114897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:18.720 [2024-11-20 09:31:44.114904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:18.720 [2024-11-20 09:31:44.114911] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:18.720 [2024-11-20 09:31:44.114918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:18.720 [2024-11-20 09:31:44.114926 .. 114951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0  high: 0  low: 0  start: 0
FTL shutdown trace, continued (all status 0):
  Dump statistics              3.244 ms
  Deinitialize L2P            12.331 ms
00:19:18.720 [2024-11-20 09:31:44.127725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:19:18.720 [2024-11-20 09:31:44.127735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:18.720 [2024-11-20 09:31:44.127743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:18.720 [2024-11-20 09:31:44.127750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.720 [2024-11-20 09:31:44.160322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.720 [2024-11-20 09:31:44.160474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.720 [2024-11-20 09:31:44.160489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.720 [2024-11-20 09:31:44.160497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.720 [2024-11-20 09:31:44.160559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.720 [2024-11-20 09:31:44.160567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.720 [2024-11-20 09:31:44.160576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.720 [2024-11-20 09:31:44.160584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.720 [2024-11-20 09:31:44.160643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.720 [2024-11-20 09:31:44.160652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.720 [2024-11-20 09:31:44.160660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.720 [2024-11-20 09:31:44.160668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.720 [2024-11-20 09:31:44.160682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.720 [2024-11-20 09:31:44.160689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.720 [2024-11-20 09:31:44.160697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.720 [2024-11-20 09:31:44.160704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.978 [2024-11-20 09:31:44.238570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.978 [2024-11-20 09:31:44.238612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.978 [2024-11-20 09:31:44.238623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.978 [2024-11-20 09:31:44.238631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.978 [2024-11-20 09:31:44.301946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.978 [2024-11-20 09:31:44.301990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.978 [2024-11-20 09:31:44.302001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.978 [2024-11-20 09:31:44.302008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.978 [2024-11-20 09:31:44.302081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.978 [2024-11-20 09:31:44.302092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.978 [2024-11-20 09:31:44.302100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.978 [2024-11-20 09:31:44.302107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
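The statistics dump above ends with "WAF: inf" alongside "total writes: 960" and "user writes: 0". Write amplification factor is the ratio of media writes to user-initiated writes, so a run with no user I/O reports it as infinite. A minimal sketch of that calculation, assuming this simple ratio definition:

```python
# Minimal sketch: WAF as total media writes / user writes.
# Assumption: ftl_dev_dump_stats prints "inf" exactly when user
# writes are zero, as in this run (960 total, 0 user).
def waf(total_writes: int, user_writes: int) -> float:
    if user_writes == 0:
        return float("inf")  # matches "WAF: inf" in the dump above
    return total_writes / user_writes

assert waf(960, 0) == float("inf")
```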
00:19:18.979 [2024-11-20 09:31:44.302138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.979 [2024-11-20 09:31:44.302147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.979 [2024-11-20 09:31:44.302155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.979 [2024-11-20 09:31:44.302162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-11-20 09:31:44.302249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.979 [2024-11-20 09:31:44.302262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.979 [2024-11-20 09:31:44.302270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.979 [2024-11-20 09:31:44.302277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-11-20 09:31:44.302321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.979 [2024-11-20 09:31:44.302330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:18.979 [2024-11-20 09:31:44.302338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.979 [2024-11-20 09:31:44.302346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-11-20 09:31:44.302379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.979 [2024-11-20 09:31:44.302387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.979 [2024-11-20 09:31:44.302397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.979 [2024-11-20 09:31:44.302404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-11-20 09:31:44.302443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.979 [2024-11-20 09:31:44.302452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.979 [2024-11-20 09:31:44.302460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.979 [2024-11-20 09:31:44.302468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-11-20 09:31:44.302580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.901 ms, result 0 00:19:19.912 00:19:19.912 00:19:19.912 09:31:45 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:19.912 [2024-11-20 09:31:45.352121] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
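The spdk_dd invocation above reads --count=262144 blocks from the ftl0 bdev into the test file. Assuming a 4 KiB FTL block size (an assumption; the command line itself only gives the block count), that works out to exactly the 1024 MB total that the copy progress output below counts up to:

```python
# Sanity check of the copy size, assuming 4 KiB FTL blocks.
count = 262144                      # --count from the spdk_dd command line
block_size = 4096                   # bytes per block (assumed)
print(count * block_size // 2**20)  # 1024 -> "Copying: .../1024 [MB]"
```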
00:19:19.912 [2024-11-20 09:31:45.352228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74939 ] 00:19:20.171 [2024-11-20 09:31:45.515510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.171 [2024-11-20 09:31:45.617634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.429 [2024-11-20 09:31:45.872350] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.429 [2024-11-20 09:31:45.872415] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.687 [2024-11-20 09:31:46.029590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.029642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.687 [2024-11-20 09:31:46.029658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.687 [2024-11-20 09:31:46.029667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.029713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.029724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.687 [2024-11-20 09:31:46.029734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:20.687 [2024-11-20 09:31:46.029741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.029760] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.687 [2024-11-20 09:31:46.030461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.687 [2024-11-20 09:31:46.030478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.030485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.687 [2024-11-20 09:31:46.030494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:19:20.687 [2024-11-20 09:31:46.030501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.031563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.687 [2024-11-20 09:31:46.043443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.043477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.687 [2024-11-20 09:31:46.043489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.881 ms 00:19:20.687 [2024-11-20 09:31:46.043497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.043548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.043557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.687 [2024-11-20 09:31:46.043565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:20.687 [2024-11-20 09:31:46.043572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.048171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.687 [2024-11-20 09:31:46.048200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.687 [2024-11-20 09:31:46.048209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:19:20.687 [2024-11-20 09:31:46.048216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.048281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.048289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.687 [2024-11-20 09:31:46.048297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:20.687 [2024-11-20 09:31:46.048322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.048367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.048377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.687 [2024-11-20 09:31:46.048385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.687 [2024-11-20 09:31:46.048393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.048412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.687 [2024-11-20 09:31:46.051607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.051633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.687 [2024-11-20 09:31:46.051642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:19:20.687 [2024-11-20 09:31:46.051651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.051677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.051685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.687 [2024-11-20 09:31:46.051693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:20.687 [2024-11-20 09:31:46.051701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.051719] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.687 [2024-11-20 09:31:46.051735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.687 [2024-11-20 09:31:46.051768] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.687 [2024-11-20 09:31:46.051785] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.687 [2024-11-20 09:31:46.051885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.687 [2024-11-20 09:31:46.051895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.687 [2024-11-20 09:31:46.051905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.687 [2024-11-20 09:31:46.051914] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.687 [2024-11-20 09:31:46.051924] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.687 [2024-11-20 09:31:46.051932] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:20.687 [2024-11-20 09:31:46.051939] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.687 [2024-11-20 09:31:46.051946] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.687 [2024-11-20 09:31:46.051953] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.687 [2024-11-20 09:31:46.051963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.051970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.687 [2024-11-20 09:31:46.051978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:20.687 [2024-11-20 09:31:46.051985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.052066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.687 [2024-11-20 09:31:46.052074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.687 [2024-11-20 09:31:46.052081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:20.687 [2024-11-20 09:31:46.052087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.687 [2024-11-20 09:31:46.052186] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.687 [2024-11-20 09:31:46.052197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.687 [2024-11-20 09:31:46.052205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.687 [2024-11-20 09:31:46.052213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.687 [2024-11-20 09:31:46.052220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.687 [2024-11-20 09:31:46.052227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.687 [2024-11-20 09:31:46.052233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:20.687 [2024-11-20 09:31:46.052241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.687 [2024-11-20 09:31:46.052248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:20.687 [2024-11-20 09:31:46.052254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.687 [2024-11-20 09:31:46.052261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.687 [2024-11-20 09:31:46.052267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:20.687 [2024-11-20 09:31:46.052273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.688 [2024-11-20 09:31:46.052279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.688 [2024-11-20 09:31:46.052286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:20.688 [2024-11-20 09:31:46.052314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.688 [2024-11-20 09:31:46.052328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052335] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.688 [2024-11-20 09:31:46.052350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.688 [2024-11-20 09:31:46.052370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.688 [2024-11-20 09:31:46.052392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.688 [2024-11-20 09:31:46.052411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.688 [2024-11-20 09:31:46.052431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.688 [2024-11-20 09:31:46.052443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.688 [2024-11-20 09:31:46.052449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:20.688 [2024-11-20 09:31:46.052455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.688 [2024-11-20 09:31:46.052462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.688 [2024-11-20 09:31:46.052468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:20.688 [2024-11-20 09:31:46.052474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.688 [2024-11-20 09:31:46.052488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:20.688 [2024-11-20 09:31:46.052494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052500] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.688 [2024-11-20 09:31:46.052507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.688 [2024-11-20 09:31:46.052514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.688 [2024-11-20 09:31:46.052529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.688 [2024-11-20 09:31:46.052535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.688 [2024-11-20 09:31:46.052541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.688 
[2024-11-20 09:31:46.052548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.688 [2024-11-20 09:31:46.052554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.688 [2024-11-20 09:31:46.052561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.688 [2024-11-20 09:31:46.052569] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.688 [2024-11-20 09:31:46.052578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:20.688 [2024-11-20 09:31:46.052593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:20.688 [2024-11-20 09:31:46.052600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:20.688 [2024-11-20 09:31:46.052607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:20.688 [2024-11-20 09:31:46.052613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:20.688 [2024-11-20 09:31:46.052620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:20.688 [2024-11-20 09:31:46.052627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:20.688 [2024-11-20 09:31:46.052634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:20.688 [2024-11-20 09:31:46.052640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:20.688 [2024-11-20 09:31:46.052647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:20.688 [2024-11-20 09:31:46.052682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.688 [2024-11-20 09:31:46.052692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.688 [2024-11-20 09:31:46.052707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.688 [2024-11-20 09:31:46.052714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.688 [2024-11-20 09:31:46.052720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.688 [2024-11-20 09:31:46.052727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.052734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.688 [2024-11-20 09:31:46.052742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:19:20.688 [2024-11-20 09:31:46.052748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.078333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.078375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.688 [2024-11-20 09:31:46.078387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.534 ms 00:19:20.688 [2024-11-20 09:31:46.078395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.078488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.078496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.688 [2024-11-20 09:31:46.078504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:20.688 [2024-11-20 09:31:46.078511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.124937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.124983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.688 [2024-11-20 09:31:46.124996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.361 ms 00:19:20.688 [2024-11-20 09:31:46.125004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.125055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.125064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.688 [2024-11-20 09:31:46.125072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.688 [2024-11-20 09:31:46.125082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.125474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.125496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.688 [2024-11-20 09:31:46.125505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:20.688 [2024-11-20 09:31:46.125513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.125640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.125649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.688 [2024-11-20 09:31:46.125657] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:20.688 [2024-11-20 09:31:46.125668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.688 [2024-11-20 09:31:46.138712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.688 [2024-11-20 09:31:46.138746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.688 [2024-11-20 09:31:46.138758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.026 ms 00:19:20.688 [2024-11-20 09:31:46.138765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.151486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:20.947 [2024-11-20 09:31:46.151640] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.947 [2024-11-20 09:31:46.151656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.151664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.947 [2024-11-20 09:31:46.151672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.793 ms 00:19:20.947 [2024-11-20 09:31:46.151680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.176158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.176203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.947 [2024-11-20 09:31:46.176217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.177 ms 00:19:20.947 [2024-11-20 09:31:46.176225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.187910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.187958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.947 [2024-11-20 09:31:46.187968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.646 ms 00:19:20.947 [2024-11-20 09:31:46.187975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.199325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.199356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.947 [2024-11-20 09:31:46.199366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.318 ms 00:19:20.947 [2024-11-20 09:31:46.199373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.199972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.199990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.947 [2024-11-20 09:31:46.199999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:19:20.947 [2024-11-20 09:31:46.200008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.254928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.255120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.947 [2024-11-20 09:31:46.255144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.903 ms 00:19:20.947 [2024-11-20 09:31:46.255152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.266749] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.947 [2024-11-20 09:31:46.268935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.268963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.947 [2024-11-20 09:31:46.268975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.747 ms 00:19:20.947 [2024-11-20 09:31:46.268982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.269069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.269080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.947 [2024-11-20 09:31:46.269089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.947 [2024-11-20 09:31:46.269098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.269162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.269172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.947 [2024-11-20 09:31:46.269186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:20.947 [2024-11-20 09:31:46.269193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.269211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.269219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.947 [2024-11-20 09:31:46.269227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.947 [2024-11-20 09:31:46.269235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.269261] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.947 [2024-11-20 09:31:46.269273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.269280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.947 [2024-11-20 09:31:46.269288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:20.947 [2024-11-20 09:31:46.269295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.291957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.292005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.947 [2024-11-20 09:31:46.292016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.625 ms 00:19:20.947 [2024-11-20 09:31:46.292027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.947 [2024-11-20 09:31:46.292094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.947 [2024-11-20 09:31:46.292104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.947 [2024-11-20 09:31:46.292112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:20.947 [2024-11-20 09:31:46.292119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
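Each management step in the startup sequence above is logged by trace_step as a name/duration/status triple, which makes the log easy to mine for slow steps (here "Initialize NV cache" at 46.361 ms and "Restore P2L checkpoints" at 54.903 ms dominate). A small sketch, assuming one record per line as in the raw console output:

```python
import re

# Pair up "name:" and "duration:" trace_step records from an FTL log
# (format taken from the mngt/ftl_mngt.c lines above; one record per line).
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$", re.M)
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def slowest_steps(log_text: str, top: int = 5):
    names = NAME_RE.findall(log_text)
    durations = [float(d) for d in DUR_RE.findall(log_text)]
    # name and duration records alternate per step, so zip pairs them up
    return sorted(zip(names, durations), key=lambda p: p[1], reverse=True)[:top]
```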
00:19:20.947 [2024-11-20 09:31:46.293105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.110 ms, result 0 00:19:22.319  [2024-11-20T09:31:48.706Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-20T09:31:49.638Z] Copying: 43/1024 [MB] (19 MBps) [2024-11-20T09:31:50.571Z] Copying: 57/1024 [MB] (14 MBps) [2024-11-20T09:31:51.504Z] Copying: 77/1024 [MB] (19 MBps) [2024-11-20T09:31:52.875Z] Copying: 114/1024 [MB] (37 MBps) [2024-11-20T09:31:53.806Z] Copying: 162/1024 [MB] (48 MBps) [2024-11-20T09:31:54.736Z] Copying: 210/1024 [MB] (47 MBps) [2024-11-20T09:31:55.668Z] Copying: 258/1024 [MB] (47 MBps) [2024-11-20T09:31:56.599Z] Copying: 302/1024 [MB] (44 MBps) [2024-11-20T09:31:57.533Z] Copying: 347/1024 [MB] (44 MBps) [2024-11-20T09:31:58.903Z] Copying: 391/1024 [MB] (44 MBps) [2024-11-20T09:31:59.833Z] Copying: 436/1024 [MB] (44 MBps) [2024-11-20T09:32:00.765Z] Copying: 483/1024 [MB] (46 MBps) [2024-11-20T09:32:01.697Z] Copying: 527/1024 [MB] (44 MBps) [2024-11-20T09:32:02.628Z] Copying: 571/1024 [MB] (43 MBps) [2024-11-20T09:32:03.560Z] Copying: 615/1024 [MB] (44 MBps) [2024-11-20T09:32:04.492Z] Copying: 660/1024 [MB] (45 MBps) [2024-11-20T09:32:05.865Z] Copying: 705/1024 [MB] (44 MBps) [2024-11-20T09:32:06.800Z] Copying: 752/1024 [MB] (46 MBps) [2024-11-20T09:32:07.733Z] Copying: 798/1024 [MB] (45 MBps) [2024-11-20T09:32:08.666Z] Copying: 847/1024 [MB] (49 MBps) [2024-11-20T09:32:09.600Z] Copying: 895/1024 [MB] (48 MBps) [2024-11-20T09:32:10.534Z] Copying: 944/1024 [MB] (48 MBps) [2024-11-20T09:32:11.479Z] Copying: 993/1024 [MB] (49 MBps) [2024-11-20T09:32:11.479Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-20 09:32:11.365200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.365263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:46.023 [2024-11-20 09:32:11.365279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.023 [2024-11-20 09:32:11.365287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.365324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.023 [2024-11-20 09:32:11.367943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.367973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:46.023 [2024-11-20 09:32:11.367989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:19:46.023 [2024-11-20 09:32:11.367997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.368216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.368226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:46.023 [2024-11-20 09:32:11.368234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:46.023 [2024-11-20 09:32:11.368241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.372899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.372922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:46.023 [2024-11-20 09:32:11.372932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:19:46.023 [2024-11-20 
09:32:11.372940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.379089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.379117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:46.023 [2024-11-20 09:32:11.379127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:19:46.023 [2024-11-20 09:32:11.379136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.404760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.404793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:46.023 [2024-11-20 09:32:11.404804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.564 ms 00:19:46.023 [2024-11-20 09:32:11.404811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.418143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.418175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:46.023 [2024-11-20 09:32:11.418187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.312 ms 00:19:46.023 [2024-11-20 09:32:11.418196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.418341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.418367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:46.023 [2024-11-20 09:32:11.418375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:46.023 [2024-11-20 09:32:11.418382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.441103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.441133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:46.023 [2024-11-20 09:32:11.441142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.706 ms 00:19:46.023 [2024-11-20 09:32:11.441150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.023 [2024-11-20 09:32:11.463584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.023 [2024-11-20 09:32:11.463621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:46.023 [2024-11-20 09:32:11.463631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.417 ms 00:19:46.023 [2024-11-20 09:32:11.463638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.282 [2024-11-20 09:32:11.485859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.282 [2024-11-20 09:32:11.485995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:46.282 [2024-11-20 09:32:11.486011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.203 ms 00:19:46.282 [2024-11-20 09:32:11.486019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.282 [2024-11-20 09:32:11.508350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.282 [2024-11-20 09:32:11.508467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:46.282 [2024-11-20 09:32:11.508481] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.289 ms 00:19:46.282 [2024-11-20 09:32:11.508488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.282 [2024-11-20 09:32:11.508506] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:46.282 [2024-11-20 09:32:11.508518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:19:46.282 [2024-11-20 09:32:11.508694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:46.282 [2024-11-20 09:32:11.508709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.508994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509231] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:46.283 [2024-11-20 09:32:11.509261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:46.283 [2024-11-20 09:32:11.509271] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b0e7245-2226-4a2b-958f-fecff4d7a024 00:19:46.283 [2024-11-20 09:32:11.509278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:46.283 [2024-11-20 09:32:11.509285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:46.283 [2024-11-20 09:32:11.509292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:46.283 [2024-11-20 09:32:11.509320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:46.283 [2024-11-20 09:32:11.509328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:46.283 [2024-11-20 09:32:11.509335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:46.283 [2024-11-20 09:32:11.509348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:46.283 [2024-11-20 09:32:11.509354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:46.283 [2024-11-20 09:32:11.509360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:46.283 [2024-11-20 09:32:11.509367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.283 [2024-11-20 09:32:11.509375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:46.283 [2024-11-20 09:32:11.509383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:19:46.283 [2024-11-20 09:32:11.509390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.283 [2024-11-20 09:32:11.521468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.284 [2024-11-20 09:32:11.521496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:46.284 [2024-11-20 09:32:11.521507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.062 ms 00:19:46.284 [2024-11-20 09:32:11.521514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.521847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.284 [2024-11-20 09:32:11.521855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:46.284 [2024-11-20 09:32:11.521863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:46.284 [2024-11-20 09:32:11.521874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.554257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.554290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.284 [2024-11-20 09:32:11.554312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.554321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.554374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:46.284 [2024-11-20 09:32:11.554382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.284 [2024-11-20 09:32:11.554389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.554400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.554450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.554459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.284 [2024-11-20 09:32:11.554467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.554489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.554504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.554511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.284 [2024-11-20 09:32:11.554518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.554525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.630940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.630986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.284 [2024-11-20 09:32:11.630997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.631005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.284 [2024-11-20 09:32:11.694200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.284 [2024-11-20 09:32:11.694292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.284 [2024-11-20 09:32:11.694383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.284 [2024-11-20 09:32:11.694503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 
09:32:11.694538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:46.284 [2024-11-20 09:32:11.694554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.284 [2024-11-20 09:32:11.694612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.284 [2024-11-20 09:32:11.694665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.284 [2024-11-20 09:32:11.694673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.284 [2024-11-20 09:32:11.694680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.284 [2024-11-20 09:32:11.694785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.560 ms, result 0 00:19:47.216 00:19:47.216 00:19:47.216 09:32:12 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:49.113 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:49.114 09:32:14 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:49.371 [2024-11-20 09:32:14.602221] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:19:49.372 [2024-11-20 09:32:14.602364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75247 ] 00:19:49.372 [2024-11-20 09:32:14.761852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.630 [2024-11-20 09:32:14.861718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.889 [2024-11-20 09:32:15.116171] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.889 [2024-11-20 09:32:15.116237] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.889 [2024-11-20 09:32:15.268796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.268844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.889 [2024-11-20 09:32:15.268861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.889 [2024-11-20 09:32:15.268869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.268913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.268923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.889 [2024-11-20 09:32:15.268934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:49.889 [2024-11-20 09:32:15.268942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.268960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.889 [2024-11-20 09:32:15.270022] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.889 [2024-11-20 09:32:15.270066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.270077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.889 [2024-11-20 09:32:15.270087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:19:49.889 [2024-11-20 09:32:15.270095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.271276] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.889 [2024-11-20 09:32:15.283278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.283441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.889 [2024-11-20 09:32:15.283460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.003 ms 00:19:49.889 [2024-11-20 09:32:15.283468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.283523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.283537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.889 [2024-11-20 09:32:15.283550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:49.889 [2024-11-20 09:32:15.283559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.288803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:49.889 [2024-11-20 09:32:15.288835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.889 [2024-11-20 09:32:15.288845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:19:49.889 [2024-11-20 09:32:15.288853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.288923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.288932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.889 [2024-11-20 09:32:15.288940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:49.889 [2024-11-20 09:32:15.288947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.288988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.889 [2024-11-20 09:32:15.288997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.889 [2024-11-20 09:32:15.289005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:49.889 [2024-11-20 09:32:15.289012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.889 [2024-11-20 09:32:15.289033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.889 [2024-11-20 09:32:15.292408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.890 [2024-11-20 09:32:15.292435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.890 [2024-11-20 09:32:15.292444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:19:49.890 [2024-11-20 09:32:15.292455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.890 [2024-11-20 09:32:15.292484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.890 [2024-11-20 09:32:15.292492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.890 [2024-11-20 09:32:15.292500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:49.890 [2024-11-20 09:32:15.292507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.890 [2024-11-20 09:32:15.292527] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.890 [2024-11-20 09:32:15.292544] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.890 [2024-11-20 09:32:15.292577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.890 [2024-11-20 09:32:15.292594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:49.890 [2024-11-20 09:32:15.292696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.890 [2024-11-20 09:32:15.292706] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.890 [2024-11-20 09:32:15.292717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:49.890 [2024-11-20 09:32:15.292726] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.890 [2024-11-20 09:32:15.292735] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.890 [2024-11-20 09:32:15.292743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:49.890 [2024-11-20 09:32:15.292751] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.890 [2024-11-20 09:32:15.292758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.890 [2024-11-20 09:32:15.292765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.890 [2024-11-20 09:32:15.292775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.890 [2024-11-20 09:32:15.292782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.890 [2024-11-20 09:32:15.292790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:19:49.890 [2024-11-20 09:32:15.292797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.890 [2024-11-20 09:32:15.292881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.890 [2024-11-20 09:32:15.292889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.890 [2024-11-20 09:32:15.292896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:49.890 [2024-11-20 09:32:15.292903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.890 [2024-11-20 09:32:15.293017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.890 [2024-11-20 09:32:15.293030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.890 [2024-11-20 09:32:15.293037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.890 [2024-11-20 09:32:15.293059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.890 [2024-11-20 09:32:15.293080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.890 [2024-11-20 09:32:15.293094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.890 [2024-11-20 09:32:15.293100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:49.890 [2024-11-20 09:32:15.293106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.890 [2024-11-20 09:32:15.293113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.890 [2024-11-20 09:32:15.293120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:49.890 [2024-11-20 09:32:15.293132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.890 [2024-11-20 09:32:15.293145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293151] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.890 [2024-11-20 09:32:15.293165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.890 [2024-11-20 09:32:15.293184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.890 [2024-11-20 09:32:15.293203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.890 [2024-11-20 09:32:15.293223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.890 [2024-11-20 09:32:15.293242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.890 [2024-11-20 09:32:15.293254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.890 [2024-11-20 09:32:15.293261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:49.890 [2024-11-20 09:32:15.293267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.890 [2024-11-20 09:32:15.293274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.890 [2024-11-20 09:32:15.293280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:49.890 [2024-11-20 09:32:15.293286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.890 [2024-11-20 09:32:15.293316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:49.890 [2024-11-20 09:32:15.293324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293331] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.890 [2024-11-20 09:32:15.293338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.890 [2024-11-20 09:32:15.293345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.890 [2024-11-20 09:32:15.293360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.890 [2024-11-20 09:32:15.293367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.890 [2024-11-20 09:32:15.293374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.890 
[2024-11-20 09:32:15.293381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.890 [2024-11-20 09:32:15.293388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.890 [2024-11-20 09:32:15.293400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.890 [2024-11-20 09:32:15.293409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.890 [2024-11-20 09:32:15.293418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.890 [2024-11-20 09:32:15.293426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:49.890 [2024-11-20 09:32:15.293433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:49.890 [2024-11-20 09:32:15.293441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:49.890 [2024-11-20 09:32:15.293448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:49.890 [2024-11-20 09:32:15.293454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:49.890 [2024-11-20 09:32:15.293461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:49.890 [2024-11-20 09:32:15.293468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:49.890 [2024-11-20 09:32:15.293475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:49.890 [2024-11-20 09:32:15.293482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:49.890 [2024-11-20 09:32:15.293488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:49.890 [2024-11-20 09:32:15.293495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:49.890 [2024-11-20 09:32:15.293502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:49.890 [2024-11-20 09:32:15.293509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:49.891 [2024-11-20 09:32:15.293516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:49.891 [2024-11-20 09:32:15.293522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.891 [2024-11-20 09:32:15.293532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.891 [2024-11-20 09:32:15.293540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.891 [2024-11-20 09:32:15.293547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.891 [2024-11-20 09:32:15.293555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.891 [2024-11-20 09:32:15.293561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.891 [2024-11-20 09:32:15.293568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.891 [2024-11-20 09:32:15.293575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.891 [2024-11-20 09:32:15.293586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:19:49.891 [2024-11-20 09:32:15.293598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.891 [2024-11-20 09:32:15.319795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.891 [2024-11-20 09:32:15.319837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.891 [2024-11-20 09:32:15.319849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.137 ms 00:19:49.891 [2024-11-20 09:32:15.319857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.891 [2024-11-20 09:32:15.319948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.891 [2024-11-20 09:32:15.319957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.891 [2024-11-20 09:32:15.319965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:49.891 [2024-11-20 09:32:15.319972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.149 [2024-11-20 09:32:15.362514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.362562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.150 [2024-11-20 09:32:15.362574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.485 ms 00:19:50.150 [2024-11-20 09:32:15.362583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.362634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.362644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.150 [2024-11-20 09:32:15.362652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.150 [2024-11-20 09:32:15.362663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.363037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.363054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.150 [2024-11-20 09:32:15.363063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:50.150 [2024-11-20 09:32:15.363070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.363193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.363202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.150 [2024-11-20 09:32:15.363210] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:50.150 [2024-11-20 09:32:15.363222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.376336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.376369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.150 [2024-11-20 09:32:15.376382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.097 ms 00:19:50.150 [2024-11-20 09:32:15.376389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.388586] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:50.150 [2024-11-20 09:32:15.388621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.150 [2024-11-20 09:32:15.388633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.388642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.150 [2024-11-20 09:32:15.388653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.146 ms 00:19:50.150 [2024-11-20 09:32:15.388662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.412733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.412786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.150 [2024-11-20 09:32:15.412798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.030 ms 00:19:50.150 [2024-11-20 09:32:15.412807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.424045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.424076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.150 [2024-11-20 09:32:15.424086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.199 ms 00:19:50.150 [2024-11-20 09:32:15.424093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.435117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.435149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.150 [2024-11-20 09:32:15.435159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.993 ms 00:19:50.150 [2024-11-20 09:32:15.435166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.435795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.435820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.150 [2024-11-20 09:32:15.435830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:50.150 [2024-11-20 09:32:15.435840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.489285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.489350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.150 [2024-11-20 09:32:15.489369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.426 ms 00:19:50.150 [2024-11-20 09:32:15.489378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.499722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:50.150 [2024-11-20 09:32:15.502104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.502132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.150 [2024-11-20 09:32:15.502144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.679 ms 00:19:50.150 [2024-11-20 09:32:15.502153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.502244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.502254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.150 [2024-11-20 09:32:15.502263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:50.150 [2024-11-20 09:32:15.502273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.502355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.502367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.150 [2024-11-20 09:32:15.502375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:50.150 [2024-11-20 09:32:15.502383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.502401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.502409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.150 [2024-11-20 09:32:15.502416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.150 [2024-11-20 09:32:15.502424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.502453] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.150 [2024-11-20 09:32:15.502465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.502481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.150 [2024-11-20 09:32:15.502489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:50.150 [2024-11-20 09:32:15.502496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.525552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.525586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.150 [2024-11-20 09:32:15.525597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:19:50.150 [2024-11-20 09:32:15.525610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.150 [2024-11-20 09:32:15.525679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.150 [2024-11-20 09:32:15.525688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.150 [2024-11-20 09:32:15.525696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:50.150 [2024-11-20 09:32:15.525703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:50.150 [2024-11-20 09:32:15.527141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.930 ms, result 0 00:19:51.525  [2024-11-20T09:32:17.547Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-20T09:32:18.920Z] Copying: 93/1024 [MB] (47 MBps) [2024-11-20T09:32:19.852Z] Copying: 143/1024 [MB] (50 MBps) [2024-11-20T09:32:20.785Z] Copying: 196/1024 [MB] (52 MBps) [2024-11-20T09:32:21.717Z] Copying: 247/1024 [MB] (50 MBps) [2024-11-20T09:32:22.648Z] Copying: 300/1024 [MB] (53 MBps) [2024-11-20T09:32:23.580Z] Copying: 347/1024 [MB] (47 MBps) [2024-11-20T09:32:24.951Z] Copying: 396/1024 [MB] (48 MBps) [2024-11-20T09:32:25.885Z] Copying: 441/1024 [MB] (45 MBps) [2024-11-20T09:32:26.817Z] Copying: 487/1024 [MB] (45 MBps) [2024-11-20T09:32:27.747Z] Copying: 533/1024 [MB] (46 MBps) [2024-11-20T09:32:28.681Z] Copying: 581/1024 [MB] (47 MBps) [2024-11-20T09:32:29.612Z] Copying: 626/1024 [MB] (45 MBps) [2024-11-20T09:32:30.546Z] Copying: 672/1024 [MB] (45 MBps) [2024-11-20T09:32:31.939Z] Copying: 718/1024 [MB] (46 MBps) [2024-11-20T09:32:32.873Z] Copying: 764/1024 [MB] (45 MBps) [2024-11-20T09:32:33.830Z] Copying: 809/1024 [MB] (45 MBps) [2024-11-20T09:32:34.763Z] Copying: 856/1024 [MB] (47 MBps) [2024-11-20T09:32:35.700Z] Copying: 901/1024 [MB] (45 MBps) [2024-11-20T09:32:36.631Z] Copying: 947/1024 [MB] (46 MBps) [2024-11-20T09:32:37.590Z] Copying: 994/1024 [MB] (46 MBps) [2024-11-20T09:32:38.523Z] Copying: 1023/1024 [MB] (28 MBps) [2024-11-20T09:32:38.523Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-20 09:32:38.228411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.067 [2024-11-20 09:32:38.228470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.067 [2024-11-20 09:32:38.228485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:13.067 [2024-11-20 09:32:38.228503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.067 [2024-11-20 09:32:38.230274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.067 [2024-11-20 09:32:38.234877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.067 [2024-11-20 09:32:38.234913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.067 [2024-11-20 09:32:38.234925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.557 ms 00:20:13.067 [2024-11-20 09:32:38.234934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.067 [2024-11-20 09:32:38.247406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.067 [2024-11-20 09:32:38.247568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.067 [2024-11-20 09:32:38.247598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.337 ms 00:20:13.067 [2024-11-20 09:32:38.247608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.067 [2024-11-20 09:32:38.265188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.067 [2024-11-20 09:32:38.265231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.067 [2024-11-20 09:32:38.265242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.558 ms 00:20:13.068 [2024-11-20 09:32:38.265250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.271417] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.271446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.068 [2024-11-20 09:32:38.271456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:20:13.068 [2024-11-20 09:32:38.271465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.295333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.295380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.068 [2024-11-20 09:32:38.295392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.815 ms 00:20:13.068 [2024-11-20 09:32:38.295400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.309374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.309418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.068 [2024-11-20 09:32:38.309430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.928 ms 00:20:13.068 [2024-11-20 09:32:38.309438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.357855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.357923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.068 [2024-11-20 09:32:38.357936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.371 ms 00:20:13.068 [2024-11-20 09:32:38.357944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.382389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.382635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.068 [2024-11-20 09:32:38.382654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.428 ms 00:20:13.068 [2024-11-20 09:32:38.382662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.405615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.405771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.068 [2024-11-20 09:32:38.405788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.917 ms 00:20:13.068 [2024-11-20 09:32:38.405795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.428098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.428137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.068 [2024-11-20 09:32:38.428150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 00:20:13.068 [2024-11-20 09:32:38.428158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.450593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.068 [2024-11-20 09:32:38.450629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.068 [2024-11-20 09:32:38.450641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.371 ms 00:20:13.068 [2024-11-20 09:32:38.450648] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:13.068 [2024-11-20 09:32:38.450681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.068 [2024-11-20 09:32:38.450695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:20:13.068 [2024-11-20 09:32:38.450705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.450995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.068 [2024-11-20 09:32:38.451152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451247] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.069 [2024-11-20 09:32:38.451457] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:13.069 [2024-11-20 09:32:38.451466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:13.069 [2024-11-20 09:32:38.451481] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:13.069 [2024-11-20 09:32:38.451488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b0e7245-2226-4a2b-958f-fecff4d7a024
00:20:13.069 [2024-11-20 09:32:38.451496] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784
00:20:13.069 [2024-11-20 09:32:38.451503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744
00:20:13.069 [2024-11-20 09:32:38.451510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784
00:20:13.069 [2024-11-20 09:32:38.451518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081
00:20:13.069 [2024-11-20 09:32:38.451524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:13.069 [2024-11-20 09:32:38.451536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:13.069 [2024-11-20 09:32:38.451549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:13.069 [2024-11-20 09:32:38.451556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:13.069 [2024-11-20 09:32:38.451562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:13.069 [2024-11-20 09:32:38.451569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:13.069 [2024-11-20 09:32:38.451576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:13.069 [2024-11-20 09:32:38.451584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms
00:20:13.069 [2024-11-20 09:32:38.451591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.069 [2024-11-20 09:32:38.464045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:13.069 [2024-11-20 09:32:38.464077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:13.069 [2024-11-20 09:32:38.464087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.437 ms
00:20:13.069 [2024-11-20 09:32:38.464100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.069 [2024-11-20 09:32:38.464476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:13.069 [2024-11-20 09:32:38.464525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:13.069 [2024-11-20 09:32:38.464535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms
00:20:13.069 [2024-11-20 09:32:38.464542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.069 [2024-11-20 09:32:38.496977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.069 [2024-11-20 09:32:38.497026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:13.069 [2024-11-20 09:32:38.497042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.069 [2024-11-20 09:32:38.497050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.069 [2024-11-20 09:32:38.497111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.069 [2024-11-20 09:32:38.497119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
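The stats block just above is the most useful part of this dump, and its WAF line can be verified by hand: WAF is total media writes divided by user writes, 119744 / 118784 ≈ 1.0081, i.e. about 960 blocks of FTL-internal traffic on top of the user data. Both dumps in this log are regular enough to post-process mechanically; a minimal reading aid in Python (a hypothetical helper, not an SPDK tool, and it assumes one notice per line as the messages are emitted):

    import re

    # Each band notice has the shape:
    #   ... Band <n>: <valid> / <size> wr_cnt: <w> state: <state>
    band_re = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def non_free_bands(log_text):
        """Return (band, valid_blocks) for every band not in the 'free' state."""
        return [(int(band), int(valid))
                for band, valid, size, wr_cnt, state in band_re.findall(log_text)
                if state != "free"]

    # Sanity-check the reported write amplification factor:
    total_writes, user_writes = 119744, 118784
    assert round(total_writes / user_writes, 4) == 1.0081  # matches "WAF: 1.0081"

Run against the clean-state band dump near the end of this section, non_free_bands() returns [(1, 131072)]: only Band 1 is holding data there.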
00:20:13.069 [2024-11-20 09:32:38.497127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.069 [2024-11-20 09:32:38.497134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.069 [2024-11-20 09:32:38.497191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.069 [2024-11-20 09:32:38.497200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.069 [2024-11-20 09:32:38.497207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.069 [2024-11-20 09:32:38.497217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.069 [2024-11-20 09:32:38.497232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.069 [2024-11-20 09:32:38.497239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.069 [2024-11-20 09:32:38.497246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.069 [2024-11-20 09:32:38.497253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.574985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.575039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.328 [2024-11-20 09:32:38.575055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.575062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.638915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.639106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.328 [2024-11-20 09:32:38.639122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.639129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.639204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.639212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.328 [2024-11-20 09:32:38.639220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.639228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.639265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.639274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.328 [2024-11-20 09:32:38.639281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.639289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.639407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.639417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.328 [2024-11-20 09:32:38.639425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.639432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.639465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.328 [2024-11-20 09:32:38.639473] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.328 [2024-11-20 09:32:38.639482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.328 [2024-11-20 09:32:38.639489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.328 [2024-11-20 09:32:38.639520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.329 [2024-11-20 09:32:38.639528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.329 [2024-11-20 09:32:38.639536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.329 [2024-11-20 09:32:38.639543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.329 [2024-11-20 09:32:38.639585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.329 [2024-11-20 09:32:38.639594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.329 [2024-11-20 09:32:38.639602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.329 [2024-11-20 09:32:38.639610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.329 [2024-11-20 09:32:38.639717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.448 ms, result 0 00:20:14.722 00:20:14.722 00:20:14.722 09:32:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:14.722 [2024-11-20 09:32:39.972809] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
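One note on the spdk_dd invocation above: --skip and --count are counted in I/O units, not bytes, so the numbers only make sense once the unit size is known. A back-of-envelope check, assuming 4 KiB units on ftl0 (an assumption, but the only unit size consistent with the 1024 [MB] progress total further down):

    # Hypothetical unit check for the spdk_dd arguments above
    # (assumes 4096-byte I/O units on the ftl0 bdev):
    unit = 4096
    print(262144 * unit // 2**20)  # --count -> 1024 MiB, matches "1024/1024 [MB]"
    print(131072 * unit // 2**20)  # --skip  ->  512 MiB offset into ftl0

In other words, this step reads 1 GiB back out of the device starting at the 512 MiB mark.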
00:20:14.722 [2024-11-20 09:32:39.972934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75509 ] 00:20:14.722 [2024-11-20 09:32:40.130325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.979 [2024-11-20 09:32:40.230260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.237 [2024-11-20 09:32:40.483228] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.237 [2024-11-20 09:32:40.483291] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.237 [2024-11-20 09:32:40.636770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.636827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.237 [2024-11-20 09:32:40.636845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.237 [2024-11-20 09:32:40.636852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.636899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.636909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.237 [2024-11-20 09:32:40.636919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:15.237 [2024-11-20 09:32:40.636927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.636946] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.237 [2024-11-20 09:32:40.637687] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.237 [2024-11-20 09:32:40.637708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.637716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.237 [2024-11-20 09:32:40.637724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:20:15.237 [2024-11-20 09:32:40.637732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.638794] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.237 [2024-11-20 09:32:40.651007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.651040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.237 [2024-11-20 09:32:40.651051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.214 ms 00:20:15.237 [2024-11-20 09:32:40.651059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.651117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.651127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.237 [2024-11-20 09:32:40.651135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:15.237 [2024-11-20 09:32:40.651142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.655965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:15.237 [2024-11-20 09:32:40.655997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.237 [2024-11-20 09:32:40.656007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:20:15.237 [2024-11-20 09:32:40.656014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.656085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.656093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.237 [2024-11-20 09:32:40.656101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:15.237 [2024-11-20 09:32:40.656108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.656150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.656159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.237 [2024-11-20 09:32:40.656167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:15.237 [2024-11-20 09:32:40.656174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.656195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.237 [2024-11-20 09:32:40.659486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.659513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.237 [2024-11-20 09:32:40.659523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms 00:20:15.237 [2024-11-20 09:32:40.659532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.659560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.659567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.237 [2024-11-20 09:32:40.659575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:15.237 [2024-11-20 09:32:40.659582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.659601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.237 [2024-11-20 09:32:40.659618] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:15.237 [2024-11-20 09:32:40.659651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.237 [2024-11-20 09:32:40.659668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:15.237 [2024-11-20 09:32:40.659768] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.237 [2024-11-20 09:32:40.659779] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.237 [2024-11-20 09:32:40.659789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:15.237 [2024-11-20 09:32:40.659799] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.237 [2024-11-20 09:32:40.659807] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.237 [2024-11-20 09:32:40.659816] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:15.237 [2024-11-20 09:32:40.659823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.237 [2024-11-20 09:32:40.659830] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.237 [2024-11-20 09:32:40.659837] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.237 [2024-11-20 09:32:40.659847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.659854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.237 [2024-11-20 09:32:40.659862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:20:15.237 [2024-11-20 09:32:40.659869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.659951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.237 [2024-11-20 09:32:40.659958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.237 [2024-11-20 09:32:40.659966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:15.237 [2024-11-20 09:32:40.659972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.237 [2024-11-20 09:32:40.660073] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.237 [2024-11-20 09:32:40.660084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.237 [2024-11-20 09:32:40.660092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.237 [2024-11-20 09:32:40.660099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.237 [2024-11-20 09:32:40.660106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.237 [2024-11-20 09:32:40.660113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.237 [2024-11-20 09:32:40.660120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:15.237 [2024-11-20 09:32:40.660127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.237 [2024-11-20 09:32:40.660134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:15.237 [2024-11-20 09:32:40.660141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.237 [2024-11-20 09:32:40.660148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.237 [2024-11-20 09:32:40.660154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:15.238 [2024-11-20 09:32:40.660160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.238 [2024-11-20 09:32:40.660167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.238 [2024-11-20 09:32:40.660173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:15.238 [2024-11-20 09:32:40.660185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.238 [2024-11-20 09:32:40.660199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660205] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.238 [2024-11-20 09:32:40.660218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.238 [2024-11-20 09:32:40.660237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.238 [2024-11-20 09:32:40.660256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.238 [2024-11-20 09:32:40.660275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.238 [2024-11-20 09:32:40.660295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.238 [2024-11-20 09:32:40.660326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.238 [2024-11-20 09:32:40.660333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:15.238 [2024-11-20 09:32:40.660340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.238 [2024-11-20 09:32:40.660346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.238 [2024-11-20 09:32:40.660353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:15.238 [2024-11-20 09:32:40.660359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.238 [2024-11-20 09:32:40.660373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:15.238 [2024-11-20 09:32:40.660380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660386] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.238 [2024-11-20 09:32:40.660394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.238 [2024-11-20 09:32:40.660401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.238 [2024-11-20 09:32:40.660421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.238 [2024-11-20 09:32:40.660427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.238 [2024-11-20 09:32:40.660435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.238 
[2024-11-20 09:32:40.660442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.238 [2024-11-20 09:32:40.660448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.238 [2024-11-20 09:32:40.660454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.238 [2024-11-20 09:32:40.660462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.238 [2024-11-20 09:32:40.660470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:15.238 [2024-11-20 09:32:40.660486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:15.238 [2024-11-20 09:32:40.660492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:15.238 [2024-11-20 09:32:40.660500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:15.238 [2024-11-20 09:32:40.660507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:15.238 [2024-11-20 09:32:40.660513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:15.238 [2024-11-20 09:32:40.660520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:15.238 [2024-11-20 09:32:40.660527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:15.238 [2024-11-20 09:32:40.660534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:15.238 [2024-11-20 09:32:40.660541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:15.238 [2024-11-20 09:32:40.660576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.238 [2024-11-20 09:32:40.660586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.238 [2024-11-20 09:32:40.660601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.238 [2024-11-20 09:32:40.660607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.238 [2024-11-20 09:32:40.660614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.238 [2024-11-20 09:32:40.660621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-11-20 09:32:40.660628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.238 [2024-11-20 09:32:40.660635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:20:15.238 [2024-11-20 09:32:40.660642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-11-20 09:32:40.686272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-11-20 09:32:40.686322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.238 [2024-11-20 09:32:40.686333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.576 ms 00:20:15.238 [2024-11-20 09:32:40.686341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.238 [2024-11-20 09:32:40.686441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.238 [2024-11-20 09:32:40.686449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.238 [2024-11-20 09:32:40.686457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:15.238 [2024-11-20 09:32:40.686464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.496 [2024-11-20 09:32:40.729170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.496 [2024-11-20 09:32:40.729225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.496 [2024-11-20 09:32:40.729239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.654 ms 00:20:15.496 [2024-11-20 09:32:40.729247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.496 [2024-11-20 09:32:40.729316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.496 [2024-11-20 09:32:40.729327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.496 [2024-11-20 09:32:40.729336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:15.496 [2024-11-20 09:32:40.729347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.496 [2024-11-20 09:32:40.729732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.496 [2024-11-20 09:32:40.729754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.496 [2024-11-20 09:32:40.729764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:15.497 [2024-11-20 09:32:40.729772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.729899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.729908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.497 [2024-11-20 09:32:40.729916] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:15.497 [2024-11-20 09:32:40.729927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.742975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.743008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.497 [2024-11-20 09:32:40.743020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:20:15.497 [2024-11-20 09:32:40.743027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.755335] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:15.497 [2024-11-20 09:32:40.755479] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.497 [2024-11-20 09:32:40.755495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.755502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.497 [2024-11-20 09:32:40.755512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.370 ms 00:20:15.497 [2024-11-20 09:32:40.755518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.779821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.779861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.497 [2024-11-20 09:32:40.779872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.268 ms 00:20:15.497 [2024-11-20 09:32:40.779880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.791147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.791283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.497 [2024-11-20 09:32:40.791314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.245 ms 00:20:15.497 [2024-11-20 09:32:40.791322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.802397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.802538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.497 [2024-11-20 09:32:40.802554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.046 ms 00:20:15.497 [2024-11-20 09:32:40.802563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.803172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.803194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.497 [2024-11-20 09:32:40.803203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:15.497 [2024-11-20 09:32:40.803213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.858217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.858441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.497 [2024-11-20 09:32:40.858465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.986 ms 00:20:15.497 [2024-11-20 09:32:40.858473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.869241] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:15.497 [2024-11-20 09:32:40.871913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.871946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.497 [2024-11-20 09:32:40.871959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.395 ms 00:20:15.497 [2024-11-20 09:32:40.871968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.872070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.872081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.497 [2024-11-20 09:32:40.872089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.497 [2024-11-20 09:32:40.872099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.873495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.873527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.497 [2024-11-20 09:32:40.873538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:20:15.497 [2024-11-20 09:32:40.873547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.873573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.873582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.497 [2024-11-20 09:32:40.873591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.497 [2024-11-20 09:32:40.873599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.873634] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.497 [2024-11-20 09:32:40.873647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.873655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.497 [2024-11-20 09:32:40.873665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:15.497 [2024-11-20 09:32:40.873673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.897150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.897193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.497 [2024-11-20 09:32:40.897205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.459 ms 00:20:15.497 [2024-11-20 09:32:40.897217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.497 [2024-11-20 09:32:40.897293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.497 [2024-11-20 09:32:40.897319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.497 [2024-11-20 09:32:40.897328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:15.497 [2024-11-20 09:32:40.897336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
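Every management step above is logged by mngt/ftl_mngt.c as a name/duration/status triple, so the startup sequence can be profiled straight from the log. A sketch of such a reader (illustrative only, not part of SPDK; it assumes one notice per line):

    import re

    name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
    dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms$")

    def step_durations(lines):
        """Pair each 'name:' trace_step notice with the 'duration:' that follows it."""
        pending = None
        for line in lines:
            if (m := name_re.search(line)):
                pending = m.group(1).strip()
            elif (m := dur_re.search(line)) and pending is not None:
                yield pending, float(m.group(1))
                pending = None

Summed over the startup steps above, the per-step durations come to roughly 256.6 ms, within a few milliseconds of the 262.050 ms 'FTL startup' total reported just below; the gap is time spent between steps.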
00:20:15.497 [2024-11-20 09:32:40.899927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.050 ms, result 0
00:20:16.877  [2024-11-20T09:32:43.276Z] Copying: 43/1024 [MB] (43 MBps)
[2024-11-20T09:32:44.210Z] Copying: 92/1024 [MB] (48 MBps)
[2024-11-20T09:32:45.159Z] Copying: 140/1024 [MB] (48 MBps)
[2024-11-20T09:32:46.090Z] Copying: 190/1024 [MB] (49 MBps)
[2024-11-20T09:32:47.473Z] Copying: 239/1024 [MB] (49 MBps)
[2024-11-20T09:32:48.477Z] Copying: 287/1024 [MB] (48 MBps)
[2024-11-20T09:32:49.440Z] Copying: 336/1024 [MB] (48 MBps)
[2024-11-20T09:32:50.371Z] Copying: 383/1024 [MB] (46 MBps)
[2024-11-20T09:32:51.302Z] Copying: 431/1024 [MB] (47 MBps)
[2024-11-20T09:32:52.263Z] Copying: 477/1024 [MB] (46 MBps)
[2024-11-20T09:32:53.197Z] Copying: 524/1024 [MB] (47 MBps)
[2024-11-20T09:32:54.129Z] Copying: 573/1024 [MB] (48 MBps)
[2024-11-20T09:32:55.503Z] Copying: 623/1024 [MB] (50 MBps)
[2024-11-20T09:32:56.436Z] Copying: 668/1024 [MB] (45 MBps)
[2024-11-20T09:32:57.369Z] Copying: 713/1024 [MB] (44 MBps)
[2024-11-20T09:32:58.376Z] Copying: 761/1024 [MB] (48 MBps)
[2024-11-20T09:32:59.308Z] Copying: 808/1024 [MB] (46 MBps)
[2024-11-20T09:33:00.241Z] Copying: 856/1024 [MB] (47 MBps)
[2024-11-20T09:33:01.174Z] Copying: 901/1024 [MB] (45 MBps)
[2024-11-20T09:33:02.104Z] Copying: 947/1024 [MB] (45 MBps)
[2024-11-20T09:33:03.037Z] Copying: 993/1024 [MB] (46 MBps)
[2024-11-20T09:33:03.037Z] Copying: 1024/1024 [MB] (average 47 MBps)
[2024-11-20 09:33:02.935907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.581 [2024-11-20 09:33:02.936076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:37.581 [2024-11-20 09:33:02.936143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:37.581 [2024-11-20 09:33:02.936167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.581 [2024-11-20 09:33:02.936214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:37.581 [2024-11-20 09:33:02.938836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.581 [2024-11-20 09:33:02.938936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:37.581 [2024-11-20 09:33:02.939016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms
00:20:37.581 [2024-11-20 09:33:02.939039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.581 [2024-11-20 09:33:02.939269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.581 [2024-11-20 09:33:02.939311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:37.581 [2024-11-20 09:33:02.939333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms
00:20:37.581 [2024-11-20 09:33:02.939391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.581 [2024-11-20 09:33:02.943646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.581 [2024-11-20 09:33:02.943749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:37.581 [2024-11-20 09:33:02.945455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms
00:20:37.581 [2024-11-20 09:33:02.945559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.581 [2024-11-20 09:33:02.952780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
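The Jenkins-side [2024-11-20T09:32:...Z] timestamps on the progress entries make the reported average easy to cross-check: 1024 MB moved between FTL startup finishing at 09:32:40.9 and the first shutdown step at 09:33:02.9, roughly 22 seconds:

    # Rough average-throughput check from the timestamps above:
    elapsed_s = (33 * 60 + 2.9) - (32 * 60 + 40.9)  # 09:32:40.9 -> 09:33:02.9
    print(round(1024 / elapsed_s))  # -> 47, matching "(average 47 MBps)"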
00:20:37.581 [2024-11-20 09:33:02.952883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.581 [2024-11-20 09:33:02.952939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.158 ms 00:20:37.581 [2024-11-20 09:33:02.952963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.581 [2024-11-20 09:33:02.979071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.581 [2024-11-20 09:33:02.979194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.581 [2024-11-20 09:33:02.979275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.023 ms 00:20:37.582 [2024-11-20 09:33:02.979313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.582 [2024-11-20 09:33:02.999193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.582 [2024-11-20 09:33:02.999251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.582 [2024-11-20 09:33:02.999270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.785 ms 00:20:37.582 [2024-11-20 09:33:02.999283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 09:33:03.061236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.937 [2024-11-20 09:33:03.061332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.937 [2024-11-20 09:33:03.061351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.878 ms 00:20:37.937 [2024-11-20 09:33:03.061364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 09:33:03.095440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.937 [2024-11-20 09:33:03.095484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.937 [2024-11-20 09:33:03.095496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.054 ms 00:20:37.937 [2024-11-20 09:33:03.095504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 09:33:03.118100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.937 [2024-11-20 09:33:03.118222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.937 [2024-11-20 09:33:03.118246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.560 ms 00:20:37.937 [2024-11-20 09:33:03.118253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 09:33:03.140314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.937 [2024-11-20 09:33:03.140421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.937 [2024-11-20 09:33:03.140435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.031 ms 00:20:37.937 [2024-11-20 09:33:03.140442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 09:33:03.162136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.937 [2024-11-20 09:33:03.162166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.937 [2024-11-20 09:33:03.162176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.628 ms 00:20:37.937 [2024-11-20 09:33:03.162184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.937 [2024-11-20 
09:33:03.162212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.937 [2024-11-20 09:33:03.162225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:20:37.937 [2024-11-20 09:33:03.162235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.937 [2024-11-20 09:33:03.162341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 09:33:03.162415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.938 [2024-11-20 
09:33:03.162422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:20:37.938 [2024-11-20 09:33:03.162429 - 09:33:03.162993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 26-100: 0 / 261120 wr_cnt: 0 state: free (75 identical records)
00:20:37.938 [2024-11-20 09:33:03.163008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:37.938 [2024-11-20 09:33:03.163015] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b0e7245-2226-4a2b-958f-fecff4d7a024
00:20:37.938 [2024-11-20 09:33:03.163023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:20:37.938 [2024-11-20 09:33:03.163030] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13248
00:20:37.939 [2024-11-20 09:33:03.163037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12288
00:20:37.939 [2024-11-20 09:33:03.163045] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0781
00:20:37.939 [2024-11-20 09:33:03.163052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:37.939 [2024-11-20 09:33:03.163063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:37.939 [2024-11-20 09:33:03.163070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:37.939 [2024-11-20 09:33:03.163081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:37.939 [2024-11-20 09:33:03.163090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
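The WAF figure in the dump is simply total writes divided by user writes, both reported a few lines up: FTL issued 13248 block writes to serve 12288 user block writes, the extra 960 blocks being its own bookkeeping traffic on top of user data. A one-line check of the reported value (numbers taken from the log, shown only for illustration):

    awk 'BEGIN { printf "%.4f\n", 13248 / 12288 }'   # prints 1.0781, matching 'WAF: 1.0781'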
00:20:37.939 [2024-11-20 09:33:03.163097] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.886 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.175108] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 11.974 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.175518] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.347 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.209572] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.209691] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.209767] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.209810] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:20:37.939 [2024-11-20 09:33:03.292720] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.359889] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360037] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360099] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360209] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360262] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360342] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:20:38.210 [2024-11-20 09:33:03.360551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 424.619 ms, result 0
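The md5sum -c check below is ftl_restore's actual pass/fail criterion: a checksum manifest for the test file has to verify again after the shutdown/restore cycle. The pattern, using the paths from this log (the manifest itself is produced by an earlier step of restore.sh; this is only a sketch of the idea):

    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # record, before shutdown
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # verify, after restore; prints 'testfile: OK' on success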
00:20:38.776
00:20:38.776
00:20:38.776 09:33:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:41.304 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
09:33:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
09:33:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
09:33:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
09:33:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
09:33:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:41.304 Process with pid 74468 is not found
00:20:41.304 Remove shared memory files
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74468
00:20:41.304 09:33:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74468 ']'
00:20:41.304 09:33:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74468
00:20:41.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74468) - No such process
00:20:41.304 09:33:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74468 is not found'
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:41.304 09:33:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:20:41.304 ************************************
00:20:41.304 END TEST ftl_restore
00:20:41.304 ************************************
00:20:41.304
00:20:41.304 real 2m4.874s
00:20:41.304 user 1m55.079s
00:20:41.304 sys 0m11.538s
09:33:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:41.304 09:33:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:41.304 09:33:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.304 09:33:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:41.304 09:33:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.304 09:33:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:41.305 ************************************ 00:20:41.305 START TEST ftl_dirty_shutdown 00:20:41.305 ************************************ 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.305 * Looking for test storage... 00:20:41.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.305 --rc genhtml_branch_coverage=1 00:20:41.305 --rc genhtml_function_coverage=1 00:20:41.305 --rc genhtml_legend=1 00:20:41.305 --rc geninfo_all_blocks=1 00:20:41.305 --rc geninfo_unexecuted_blocks=1 00:20:41.305 00:20:41.305 ' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.305 --rc genhtml_branch_coverage=1 00:20:41.305 --rc genhtml_function_coverage=1 00:20:41.305 --rc genhtml_legend=1 00:20:41.305 --rc geninfo_all_blocks=1 00:20:41.305 --rc geninfo_unexecuted_blocks=1 00:20:41.305 00:20:41.305 ' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.305 --rc genhtml_branch_coverage=1 00:20:41.305 --rc genhtml_function_coverage=1 00:20:41.305 --rc genhtml_legend=1 00:20:41.305 --rc geninfo_all_blocks=1 00:20:41.305 --rc geninfo_unexecuted_blocks=1 00:20:41.305 00:20:41.305 ' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.305 --rc genhtml_branch_coverage=1 00:20:41.305 --rc genhtml_function_coverage=1 00:20:41.305 --rc genhtml_legend=1 00:20:41.305 --rc geninfo_all_blocks=1 00:20:41.305 --rc geninfo_unexecuted_blocks=1 00:20:41.305 00:20:41.305 ' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:41.305 09:33:06 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=75853 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75853 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 75853 ']' 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.305 09:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:41.305 [2024-11-20 09:33:06.596591] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
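waitforlisten is what gates the rest of the script on the target actually being up: per the trace above it watches pid 75853 with rpc_addr=/var/tmp/spdk.sock and max_retries=100. Roughly this loop, sketched here rather than the helper's exact body:

    # illustrative polling loop, not the real waitforlisten implementation
    for ((i = 0; i < 100; i++)); do
        kill -0 75853 || break    # stop early if spdk_tgt already died
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                 # the RPC socket is answering; target is ready
        fi
        sleep 0.5
    done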
00:20:41.305 [2024-11-20 09:33:06.596838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75853 ] 00:20:41.563 [2024-11-20 09:33:06.757641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.563 [2024-11-20 09:33:06.855213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:42.129 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:20:42.393 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:42.651 { 00:20:42.651 "name": "nvme0n1", 00:20:42.651 "aliases": [ 00:20:42.651 "9575b614-4071-477a-a9a7-946432940f13" 00:20:42.651 ], 00:20:42.651 "product_name": "NVMe disk", 00:20:42.651 "block_size": 4096, 00:20:42.651 "num_blocks": 1310720, 00:20:42.651 "uuid": "9575b614-4071-477a-a9a7-946432940f13", 00:20:42.651 "numa_id": -1, 00:20:42.651 "assigned_rate_limits": { 00:20:42.651 "rw_ios_per_sec": 0, 00:20:42.651 "rw_mbytes_per_sec": 0, 00:20:42.651 "r_mbytes_per_sec": 0, 00:20:42.651 "w_mbytes_per_sec": 0 00:20:42.651 }, 00:20:42.651 "claimed": true, 00:20:42.651 "claim_type": "read_many_write_one", 00:20:42.651 "zoned": false, 00:20:42.651 "supported_io_types": { 00:20:42.651 "read": true, 00:20:42.651 "write": true, 00:20:42.651 "unmap": true, 00:20:42.651 "flush": true, 00:20:42.651 "reset": true, 00:20:42.651 "nvme_admin": true, 00:20:42.651 "nvme_io": true, 00:20:42.651 "nvme_io_md": false, 00:20:42.651 "write_zeroes": true, 00:20:42.651 "zcopy": false, 00:20:42.651 "get_zone_info": false, 00:20:42.651 "zone_management": false, 00:20:42.651 "zone_append": false, 00:20:42.651 "compare": true, 00:20:42.651 "compare_and_write": false, 00:20:42.651 "abort": true, 00:20:42.651 "seek_hole": false, 00:20:42.651 "seek_data": false, 00:20:42.651 
"copy": true, 00:20:42.651 "nvme_iov_md": false 00:20:42.651 }, 00:20:42.651 "driver_specific": { 00:20:42.651 "nvme": [ 00:20:42.651 { 00:20:42.651 "pci_address": "0000:00:11.0", 00:20:42.651 "trid": { 00:20:42.651 "trtype": "PCIe", 00:20:42.651 "traddr": "0000:00:11.0" 00:20:42.651 }, 00:20:42.651 "ctrlr_data": { 00:20:42.651 "cntlid": 0, 00:20:42.651 "vendor_id": "0x1b36", 00:20:42.651 "model_number": "QEMU NVMe Ctrl", 00:20:42.651 "serial_number": "12341", 00:20:42.651 "firmware_revision": "8.0.0", 00:20:42.651 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:42.651 "oacs": { 00:20:42.651 "security": 0, 00:20:42.651 "format": 1, 00:20:42.651 "firmware": 0, 00:20:42.651 "ns_manage": 1 00:20:42.651 }, 00:20:42.651 "multi_ctrlr": false, 00:20:42.651 "ana_reporting": false 00:20:42.651 }, 00:20:42.651 "vs": { 00:20:42.651 "nvme_version": "1.4" 00:20:42.651 }, 00:20:42.651 "ns_data": { 00:20:42.651 "id": 1, 00:20:42.651 "can_share": false 00:20:42.651 } 00:20:42.651 } 00:20:42.651 ], 00:20:42.651 "mp_policy": "active_passive" 00:20:42.651 } 00:20:42.651 } 00:20:42.651 ]' 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:42.651 09:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:42.908 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c8603e5a-d534-462e-8d18-f46cca709d06 00:20:42.908 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:42.908 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8603e5a-d534-462e-8d18-f46cca709d06 00:20:43.166 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=cc063a3c-82bb-4bcc-9717-7b68e724d6e1 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cc063a3c-82bb-4bcc-9717-7b68e724d6e1 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:20:43.424 09:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:43.681 { 00:20:43.681 "name": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:43.681 "aliases": [ 00:20:43.681 "lvs/nvme0n1p0" 00:20:43.681 ], 00:20:43.681 "product_name": "Logical Volume", 00:20:43.681 "block_size": 4096, 00:20:43.681 "num_blocks": 26476544, 00:20:43.681 "uuid": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:43.681 "assigned_rate_limits": { 00:20:43.681 "rw_ios_per_sec": 0, 00:20:43.681 "rw_mbytes_per_sec": 0, 00:20:43.681 "r_mbytes_per_sec": 0, 00:20:43.681 "w_mbytes_per_sec": 0 00:20:43.681 }, 00:20:43.681 "claimed": false, 00:20:43.681 "zoned": false, 00:20:43.681 "supported_io_types": { 00:20:43.681 "read": true, 00:20:43.681 "write": true, 00:20:43.681 "unmap": true, 00:20:43.681 "flush": false, 00:20:43.681 "reset": true, 00:20:43.681 "nvme_admin": false, 00:20:43.681 "nvme_io": false, 00:20:43.681 "nvme_io_md": false, 00:20:43.681 "write_zeroes": true, 00:20:43.681 "zcopy": false, 00:20:43.681 "get_zone_info": false, 00:20:43.681 "zone_management": false, 00:20:43.681 "zone_append": false, 00:20:43.681 "compare": false, 00:20:43.681 "compare_and_write": false, 00:20:43.681 "abort": false, 00:20:43.681 "seek_hole": true, 00:20:43.681 "seek_data": true, 00:20:43.681 "copy": false, 00:20:43.681 "nvme_iov_md": false 00:20:43.681 }, 00:20:43.681 "driver_specific": { 00:20:43.681 "lvol": { 00:20:43.681 "lvol_store_uuid": "cc063a3c-82bb-4bcc-9717-7b68e724d6e1", 00:20:43.681 "base_bdev": "nvme0n1", 00:20:43.681 "thin_provision": true, 00:20:43.681 "num_allocated_clusters": 0, 00:20:43.681 "snapshot": false, 00:20:43.681 "clone": false, 00:20:43.681 "esnap_clone": false 00:20:43.681 } 00:20:43.681 } 00:20:43.681 } 00:20:43.681 ]' 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:43.681 09:33:09 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:20:43.940 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:44.197 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.197 { 00:20:44.197 "name": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:44.197 "aliases": [ 00:20:44.197 "lvs/nvme0n1p0" 00:20:44.197 ], 00:20:44.197 "product_name": "Logical Volume", 00:20:44.197 "block_size": 4096, 00:20:44.197 "num_blocks": 26476544, 00:20:44.197 "uuid": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:44.197 "assigned_rate_limits": { 00:20:44.197 "rw_ios_per_sec": 0, 00:20:44.197 "rw_mbytes_per_sec": 0, 00:20:44.197 "r_mbytes_per_sec": 0, 00:20:44.197 "w_mbytes_per_sec": 0 00:20:44.197 }, 00:20:44.197 "claimed": false, 00:20:44.197 "zoned": false, 00:20:44.197 "supported_io_types": { 00:20:44.197 "read": true, 00:20:44.197 "write": true, 00:20:44.198 "unmap": true, 00:20:44.198 "flush": false, 00:20:44.198 "reset": true, 00:20:44.198 "nvme_admin": false, 00:20:44.198 "nvme_io": false, 00:20:44.198 "nvme_io_md": false, 00:20:44.198 "write_zeroes": true, 00:20:44.198 "zcopy": false, 00:20:44.198 "get_zone_info": false, 00:20:44.198 "zone_management": false, 00:20:44.198 "zone_append": false, 00:20:44.198 "compare": false, 00:20:44.198 "compare_and_write": false, 00:20:44.198 "abort": false, 00:20:44.198 "seek_hole": true, 00:20:44.198 "seek_data": true, 00:20:44.198 "copy": false, 00:20:44.198 "nvme_iov_md": false 00:20:44.198 }, 00:20:44.198 "driver_specific": { 00:20:44.198 "lvol": { 00:20:44.198 "lvol_store_uuid": "cc063a3c-82bb-4bcc-9717-7b68e724d6e1", 00:20:44.198 "base_bdev": "nvme0n1", 00:20:44.198 "thin_provision": true, 00:20:44.198 "num_allocated_clusters": 0, 00:20:44.198 "snapshot": false, 00:20:44.198 "clone": false, 00:20:44.198 "esnap_clone": false 00:20:44.198 } 00:20:44.198 } 00:20:44.198 } 00:20:44.198 ]' 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:44.198 09:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:20:44.456 09:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a934e28-9b3c-411a-9048-8f220d60f3ba 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.763 { 00:20:44.763 "name": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:44.763 "aliases": [ 00:20:44.763 "lvs/nvme0n1p0" 00:20:44.763 ], 00:20:44.763 "product_name": "Logical Volume", 00:20:44.763 "block_size": 4096, 00:20:44.763 "num_blocks": 26476544, 00:20:44.763 "uuid": "9a934e28-9b3c-411a-9048-8f220d60f3ba", 00:20:44.763 "assigned_rate_limits": { 00:20:44.763 "rw_ios_per_sec": 0, 00:20:44.763 "rw_mbytes_per_sec": 0, 00:20:44.763 "r_mbytes_per_sec": 0, 00:20:44.763 "w_mbytes_per_sec": 0 00:20:44.763 }, 00:20:44.763 "claimed": false, 00:20:44.763 "zoned": false, 00:20:44.763 "supported_io_types": { 00:20:44.763 "read": true, 00:20:44.763 "write": true, 00:20:44.763 "unmap": true, 00:20:44.763 "flush": false, 00:20:44.763 "reset": true, 00:20:44.763 "nvme_admin": false, 00:20:44.763 "nvme_io": false, 00:20:44.763 "nvme_io_md": false, 00:20:44.763 "write_zeroes": true, 00:20:44.763 "zcopy": false, 00:20:44.763 "get_zone_info": false, 00:20:44.763 "zone_management": false, 00:20:44.763 "zone_append": false, 00:20:44.763 "compare": false, 00:20:44.763 "compare_and_write": false, 00:20:44.763 "abort": false, 00:20:44.763 "seek_hole": true, 00:20:44.763 "seek_data": true, 00:20:44.763 "copy": false, 00:20:44.763 "nvme_iov_md": false 00:20:44.763 }, 00:20:44.763 "driver_specific": { 00:20:44.763 "lvol": { 00:20:44.763 "lvol_store_uuid": "cc063a3c-82bb-4bcc-9717-7b68e724d6e1", 00:20:44.763 "base_bdev": "nvme0n1", 00:20:44.763 "thin_provision": true, 00:20:44.763 "num_allocated_clusters": 0, 00:20:44.763 "snapshot": false, 00:20:44.763 "clone": false, 00:20:44.763 "esnap_clone": false 00:20:44.763 } 00:20:44.763 } 00:20:44.763 } 00:20:44.763 ]' 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:44.763 09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9a934e28-9b3c-411a-9048-8f220d60f3ba 
--l2p_dram_limit 10'
09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
09:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9a934e28-9b3c-411a-9048-8f220d60f3ba --l2p_dram_limit 10 -c nvc0n1p0
00:20:45.021 [2024-11-20 09:33:10.302573] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
00:20:45.021 [2024-11-20 09:33:10.302688] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.033 ms, status: 0
00:20:45.021 [2024-11-20 09:33:10.302731] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:45.021 [2024-11-20 09:33:10.303391] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:45.021 [2024-11-20 09:33:10.303413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.686 ms, status: 0
00:20:45.021 [2024-11-20 09:33:10.303489] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b5a80b22-005b-497e-b6ca-72bcebdf972a
00:20:45.021 [2024-11-20 09:33:10.304512] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Default-initialize superblock, duration: 0.021 ms, status: 0
00:20:45.021 [2024-11-20 09:33:10.309459] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 4.867 ms, status: 0
00:20:45.021 [2024-11-20 09:33:10.309571] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.052 ms, status: 0
00:20:45.022 [2024-11-20 09:33:10.309637] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.008 ms, status: 0
00:20:45.022 [2024-11-20 09:33:10.309678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:45.022 [2024-11-20 09:33:10.312620] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 2.944 ms, status: 0
00:20:45.022 [2024-11-20 09:33:10.312690] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.010 ms, status: 0
00:20:45.022 [2024-11-20 09:33:10.312724] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:20:45.022 [2024-11-20 09:33:10.312831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:45.022 [2024-11-20 09:33:10.312843] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:45.022 [2024-11-20 09:33:10.312852] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:45.022 [2024-11-20 09:33:10.312861] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:45.022 [2024-11-20 09:33:10.312868] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:45.022 [2024-11-20 09:33:10.312876] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:20:45.022 [2024-11-20 09:33:10.312882] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:45.022 [2024-11-20 09:33:10.312890] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:45.022 [2024-11-20 09:33:10.312896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:45.022 [2024-11-20 09:33:10.312903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.180 ms, status: 0
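Those layout figures are mutually consistent: at the reported 4-byte L2P address size, 20971520 entries occupy exactly the 80 MiB granted to the l2p region in the dump that follows, and they map an 80 GiB logical space in 4096-byte blocks; the --l2p_dram_limit 10 passed to bdev_ftl_create above caps the DRAM-resident slice of that table at 10 MiB. Quick arithmetic, with values from the log:

    echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80 -> MiB taken by the full L2P table
    echo $(( 20971520 * 4096 / 1073741824 ))   # 80 -> GiB of user-addressable space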
00:20:45.022 [2024-11-20 09:33:10.312994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.053 ms, status: 0
00:20:45.022 [2024-11-20 09:33:10.313096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:45.022 [2024-11-20 09:33:10.313103] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313125] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:20:45.022 [2024-11-20 09:33:10.313142] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:20:45.022 [2024-11-20 09:33:10.313160] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:20:45.022 [2024-11-20 09:33:10.313178] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313197] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313215] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:20:45.022 [2024-11-20 09:33:10.313234] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:20:45.022 [2024-11-20 09:33:10.313250] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:20:45.022 [2024-11-20 09:33:10.313269] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:20:45.022 [2024-11-20 09:33:10.313286] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:20:45.022 [2024-11-20 09:33:10.313316] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:20:45.022 [2024-11-20 09:33:10.313333] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313351] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:45.022 [2024-11-20 09:33:10.313375] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:20:45.022 [2024-11-20 09:33:10.313395] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:20:45.022 [2024-11-20 09:33:10.313415] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:20:45.023 [2024-11-20 09:33:10.313434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:45.023 [2024-11-20 09:33:10.313443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:45.023 [2024-11-20 09:33:10.313451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:20:45.023 [2024-11-20 09:33:10.313458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:45.023 [2024-11-20 09:33:10.313471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:45.023 [2024-11-20 09:33:10.313476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:45.023 [2024-11-20 09:33:10.313484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:45.023 [2024-11-20 09:33:10.313490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:45.023 [2024-11-20 09:33:10.313496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:45.023 [2024-11-20 09:33:10.313502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:45.023 [2024-11-20 09:33:10.313510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:45.023 [2024-11-20 09:33:10.313542] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.023 [2024-11-20 09:33:10.313550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.023 [2024-11-20 09:33:10.313563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.023 [2024-11-20 09:33:10.313568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.023 [2024-11-20 09:33:10.313576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.023 [2024-11-20 09:33:10.313581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.023 [2024-11-20 09:33:10.313589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.023 [2024-11-20 09:33:10.313594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:20:45.023 [2024-11-20 09:33:10.313601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.023 [2024-11-20 09:33:10.313642] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:45.023 [2024-11-20 09:33:10.313654] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:47.549 [2024-11-20 09:33:12.433781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.549 [2024-11-20 09:33:12.433842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:47.549 [2024-11-20 09:33:12.433857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2120.131 ms 00:20:47.549 [2024-11-20 09:33:12.433868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.549 [2024-11-20 09:33:12.459148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.549 [2024-11-20 09:33:12.459196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.549 [2024-11-20 09:33:12.459210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.079 ms 00:20:47.549 [2024-11-20 09:33:12.459219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.549 [2024-11-20 09:33:12.459350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.549 [2024-11-20 09:33:12.459364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:47.549 [2024-11-20 09:33:12.459372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:47.549 [2024-11-20 09:33:12.459383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.549 [2024-11-20 09:33:12.489539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.549 [2024-11-20 09:33:12.489579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.549 [2024-11-20 09:33:12.489590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.114 ms 00:20:47.549 [2024-11-20 09:33:12.489601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.489631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.489643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.550 [2024-11-20 09:33:12.489651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:47.550 [2024-11-20 09:33:12.489659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.490015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.490032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.550 [2024-11-20 09:33:12.490041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:47.550 [2024-11-20 09:33:12.490049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.490157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.490167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.550 [2024-11-20 09:33:12.490177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:47.550 [2024-11-20 09:33:12.490187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.504211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.504377] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.550 [2024-11-20 09:33:12.504392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.006 ms 00:20:47.550 [2024-11-20 09:33:12.504402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.515652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:47.550 [2024-11-20 09:33:12.518401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.518428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:47.550 [2024-11-20 09:33:12.518441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.919 ms 00:20:47.550 [2024-11-20 09:33:12.518456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.581365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.581412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:47.550 [2024-11-20 09:33:12.581426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.879 ms 00:20:47.550 [2024-11-20 09:33:12.581433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.581583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.581591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:47.550 [2024-11-20 09:33:12.581602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:47.550 [2024-11-20 09:33:12.581607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.599889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.599930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:47.550 [2024-11-20 09:33:12.599942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.253 ms 00:20:47.550 [2024-11-20 09:33:12.599949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.617428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.617461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:47.550 [2024-11-20 09:33:12.617472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.440 ms 00:20:47.550 [2024-11-20 09:33:12.617478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.617932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.617944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:47.550 [2024-11-20 09:33:12.617954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:20:47.550 [2024-11-20 09:33:12.617962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.685955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.686012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:47.550 [2024-11-20 09:33:12.686031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.960 ms 00:20:47.550 [2024-11-20 09:33:12.686040] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.710038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.710075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:47.550 [2024-11-20 09:33:12.710089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.922 ms 00:20:47.550 [2024-11-20 09:33:12.710098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.733760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.733794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:47.550 [2024-11-20 09:33:12.733807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.623 ms 00:20:47.550 [2024-11-20 09:33:12.733815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.757190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.757231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:47.550 [2024-11-20 09:33:12.757244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.335 ms 00:20:47.550 [2024-11-20 09:33:12.757252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.757292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.757319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:47.550 [2024-11-20 09:33:12.757333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.550 [2024-11-20 09:33:12.757340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.757417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.550 [2024-11-20 09:33:12.757429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:47.550 [2024-11-20 09:33:12.757439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:47.550 [2024-11-20 09:33:12.757446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.550 [2024-11-20 09:33:12.758699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2455.677 ms, result 0 00:20:47.550 { 00:20:47.550 "name": "ftl0", 00:20:47.550 "uuid": "b5a80b22-005b-497e-b6ca-72bcebdf972a" 00:20:47.550 } 00:20:47.550 09:33:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:47.550 09:33:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:47.808 09:33:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:47.808 09:33:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:47.808 09:33:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:47.808 /dev/nbd0 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:48.067 1+0 records in 00:20:48.067 1+0 records out 00:20:48.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463205 s, 8.8 MB/s 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:20:48.067 09:33:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:48.067 [2024-11-20 09:33:13.348206] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:20:48.067 [2024-11-20 09:33:13.348340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75984 ] 00:20:48.067 [2024-11-20 09:33:13.507667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.325 [2024-11-20 09:33:13.605413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.698  [2024-11-20T09:33:16.086Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-20T09:33:17.019Z] Copying: 391/1024 [MB] (196 MBps) [2024-11-20T09:33:17.958Z] Copying: 587/1024 [MB] (196 MBps) [2024-11-20T09:33:18.907Z] Copying: 783/1024 [MB] (195 MBps) [2024-11-20T09:33:19.472Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:20:54.016 00:20:54.016 09:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:55.912 09:33:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:55.912 [2024-11-20 09:33:21.347242] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
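
The trace above is the write path of the dirty-shutdown test: ftl0 is exposed to the kernel over NBD, the waitfornbd helper polls until /dev/nbd0 is usable, spdk_dd fills a 1 GiB file (262144 x 4096-byte blocks) from /dev/urandom and checksums it, and the same file is then pushed through the NBD device with O_DIRECT. A condensed sketch of that sequence, using only the commands and paths visible in the trace (the retry delay inside waitfornbd is not visible in the log and is assumed here):

  # Expose the FTL bdev to the kernel as /dev/nbd0.
  modprobe nbd
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

  # waitfornbd, condensed: wait for nbd0 to show up in /proc/partitions,
  # then prove the device is readable with one 4 KiB O_DIRECT read.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1   # assumed delay; the helper's sleep is not in the trace
  done
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest)" != 0 ]
  rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest

  # 1 GiB of random data, checksummed before it ever touches FTL...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # ...then written through the kernel block device into the FTL bdev.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
      --bs=4096 --count=262144 --oflag=direct

Taking the md5sum before the data passes through FTL is what later lets the test prove the device returned exactly what was written.
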
00:20:55.912 [2024-11-20 09:33:21.347509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76073 ] 00:20:56.169 [2024-11-20 09:33:21.501064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.427 [2024-11-20 09:33:21.623477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.799  [2024-11-20T09:33:24.188Z] Copying: 35/1024 [MB] (35 MBps) [2024-11-20T09:33:25.118Z] Copying: 64/1024 [MB] (28 MBps) [2024-11-20T09:33:26.049Z] Copying: 94/1024 [MB] (29 MBps) [2024-11-20T09:33:26.978Z] Copying: 121/1024 [MB] (27 MBps) [2024-11-20T09:33:27.908Z] Copying: 150/1024 [MB] (29 MBps) [2024-11-20T09:33:29.277Z] Copying: 184/1024 [MB] (34 MBps) [2024-11-20T09:33:30.208Z] Copying: 217/1024 [MB] (33 MBps) [2024-11-20T09:33:31.137Z] Copying: 248/1024 [MB] (30 MBps) [2024-11-20T09:33:32.069Z] Copying: 279/1024 [MB] (30 MBps) [2024-11-20T09:33:33.002Z] Copying: 309/1024 [MB] (30 MBps) [2024-11-20T09:33:33.935Z] Copying: 339/1024 [MB] (30 MBps) [2024-11-20T09:33:34.929Z] Copying: 369/1024 [MB] (29 MBps) [2024-11-20T09:33:35.890Z] Copying: 400/1024 [MB] (31 MBps) [2024-11-20T09:33:37.261Z] Copying: 434/1024 [MB] (34 MBps) [2024-11-20T09:33:38.193Z] Copying: 470/1024 [MB] (35 MBps) [2024-11-20T09:33:39.222Z] Copying: 500/1024 [MB] (29 MBps) [2024-11-20T09:33:40.154Z] Copying: 530/1024 [MB] (29 MBps) [2024-11-20T09:33:41.114Z] Copying: 563/1024 [MB] (33 MBps) [2024-11-20T09:33:42.046Z] Copying: 595/1024 [MB] (32 MBps) [2024-11-20T09:33:42.977Z] Copying: 624/1024 [MB] (29 MBps) [2024-11-20T09:33:43.909Z] Copying: 655/1024 [MB] (30 MBps) [2024-11-20T09:33:45.281Z] Copying: 684/1024 [MB] (29 MBps) [2024-11-20T09:33:45.925Z] Copying: 712/1024 [MB] (27 MBps) [2024-11-20T09:33:47.294Z] Copying: 741/1024 [MB] (29 MBps) [2024-11-20T09:33:47.929Z] Copying: 773/1024 [MB] (31 MBps) [2024-11-20T09:33:48.860Z] Copying: 803/1024 [MB] (30 MBps) [2024-11-20T09:33:50.230Z] Copying: 834/1024 [MB] (30 MBps) [2024-11-20T09:33:51.162Z] Copying: 867/1024 [MB] (32 MBps) [2024-11-20T09:33:52.092Z] Copying: 898/1024 [MB] (31 MBps) [2024-11-20T09:33:53.023Z] Copying: 929/1024 [MB] (30 MBps) [2024-11-20T09:33:53.979Z] Copying: 958/1024 [MB] (29 MBps) [2024-11-20T09:33:54.912Z] Copying: 987/1024 [MB] (28 MBps) [2024-11-20T09:33:55.169Z] Copying: 1016/1024 [MB] (29 MBps) [2024-11-20T09:33:55.735Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:21:30.279 00:21:30.279 09:33:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:21:30.279 09:33:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:21:30.537 09:33:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:30.795 [2024-11-20 09:33:56.031500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.795 [2024-11-20 09:33:56.031553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:30.796 [2024-11-20 09:33:56.031567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:30.796 [2024-11-20 09:33:56.031577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.031602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:21:30.796 [2024-11-20 09:33:56.034262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.034420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:30.796 [2024-11-20 09:33:56.034441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:21:30.796 [2024-11-20 09:33:56.034450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.036222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.036250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:30.796 [2024-11-20 09:33:56.036262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:21:30.796 [2024-11-20 09:33:56.036270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.050913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.050945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:30.796 [2024-11-20 09:33:56.050958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:21:30.796 [2024-11-20 09:33:56.050965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.057881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.058020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:30.796 [2024-11-20 09:33:56.058039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:21:30.796 [2024-11-20 09:33:56.058048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.082039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.082094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:30.796 [2024-11-20 09:33:56.082108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.907 ms 00:21:30.796 [2024-11-20 09:33:56.082116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.096623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.096769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:30.796 [2024-11-20 09:33:56.096795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:21:30.796 [2024-11-20 09:33:56.096803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.096957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.096968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:30.796 [2024-11-20 09:33:56.096979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:30.796 [2024-11-20 09:33:56.096986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.119891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.120017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:30.796 [2024-11-20 09:33:56.120037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.885 ms 00:21:30.796 
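
Everything from "Deinit core IO channel" through the persist steps here is the 'FTL shutdown' management process kicked off by bdev_ftl_unload; each "Persist ..." step writes one class of metadata (L2P, NV cache, valid map, P2L, band and trim state) so a subsequent load can come up clean. The teardown that triggered it, exactly as issued a few lines up in the trace:

  # Flush the kernel block layer first, then tear down in reverse order of setup.
  sync /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
  # bdev_ftl_unload runs the management process traced here; it finishes by
  # persisting the superblock and setting the FTL clean state.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
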
[2024-11-20 09:33:56.120044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.141833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.141868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:30.796 [2024-11-20 09:33:56.141882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.752 ms 00:21:30.796 [2024-11-20 09:33:56.141889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.164115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.164154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:30.796 [2024-11-20 09:33:56.164175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.183 ms 00:21:30.796 [2024-11-20 09:33:56.164182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.186782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.796 [2024-11-20 09:33:56.186933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:30.796 [2024-11-20 09:33:56.186955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.518 ms 00:21:30.796 [2024-11-20 09:33:56.186964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.796 [2024-11-20 09:33:56.187000] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:30.796 [2024-11-20 09:33:56.187015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:21:30.796 [2024-11-20 09:33:56.187136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:30.796 [2024-11-20 09:33:56.187452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187773] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:30.797 [2024-11-20 09:33:56.187880] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:30.797 [2024-11-20 09:33:56.187889] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5a80b22-005b-497e-b6ca-72bcebdf972a 00:21:30.797 [2024-11-20 09:33:56.187897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:30.797 [2024-11-20 09:33:56.187907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:30.797 [2024-11-20 09:33:56.187916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:30.797 [2024-11-20 09:33:56.187925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:30.797 [2024-11-20 09:33:56.187932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:30.797 [2024-11-20 09:33:56.187941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:30.797 [2024-11-20 09:33:56.187948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:30.797 [2024-11-20 09:33:56.187956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:30.797 [2024-11-20 09:33:56.187962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:30.797 [2024-11-20 09:33:56.187971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.797 [2024-11-20 09:33:56.187979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:30.797 [2024-11-20 09:33:56.187989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:21:30.797 [2024-11-20 09:33:56.187996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.200777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.797 [2024-11-20 09:33:56.200818] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:30.797 [2024-11-20 09:33:56.200832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.744 ms 00:21:30.797 [2024-11-20 09:33:56.200840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.201201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.797 [2024-11-20 09:33:56.201211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:30.797 [2024-11-20 09:33:56.201221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:21:30.797 [2024-11-20 09:33:56.201229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.243087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.797 [2024-11-20 09:33:56.243140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.797 [2024-11-20 09:33:56.243154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.797 [2024-11-20 09:33:56.243163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.243234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.797 [2024-11-20 09:33:56.243242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.797 [2024-11-20 09:33:56.243252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.797 [2024-11-20 09:33:56.243259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.243410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.797 [2024-11-20 09:33:56.243423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.797 [2024-11-20 09:33:56.243433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.797 [2024-11-20 09:33:56.243441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.797 [2024-11-20 09:33:56.243462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.797 [2024-11-20 09:33:56.243470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.797 [2024-11-20 09:33:56.243479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.797 [2024-11-20 09:33:56.243486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.321771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.321830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.055 [2024-11-20 09:33:56.321843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.321850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.055 [2024-11-20 09:33:56.386409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:31.055 [2024-11-20 09:33:56.386517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.055 [2024-11-20 09:33:56.386531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.055 [2024-11-20 09:33:56.386622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.055 [2024-11-20 09:33:56.386742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.055 [2024-11-20 09:33:56.386801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.055 [2024-11-20 09:33:56.386861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.386914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.055 [2024-11-20 09:33:56.386923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.055 [2024-11-20 09:33:56.386933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.055 [2024-11-20 09:33:56.386940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.055 [2024-11-20 09:33:56.387061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.530 ms, result 0 00:21:31.055 true 00:21:31.055 09:33:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 75853 00:21:31.055 09:33:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75853 00:21:31.055 09:33:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:21:31.055 [2024-11-20 09:33:56.476483] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
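
This is the step that gives the test its name: after the clean unload above, the spdk_tgt that hosted ftl0 is killed outright rather than shut down, its shm trace file is removed, and a second 1 GiB file is prepared for a write that will happen after the stack is brought back up. Condensed from the trace (the pid is the one reported in the log):

  # No RPC, no graceful exit: SIGKILL the target that hosted ftl0.
  kill -9 75853                               # pid 75853 as shown in the trace
  rm -f /dev/shm/spdk_tgt_trace.pid75853

  # Second gigabyte of random data for the post-restart write.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
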
00:21:31.055 [2024-11-20 09:33:56.476606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76441 ] 00:21:31.314 [2024-11-20 09:33:56.636751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.314 [2024-11-20 09:33:56.736945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.716  [2024-11-20T09:33:59.105Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-20T09:34:00.036Z] Copying: 481/1024 [MB] (256 MBps) [2024-11-20T09:34:00.969Z] Copying: 737/1024 [MB] (255 MBps) [2024-11-20T09:34:01.227Z] Copying: 995/1024 [MB] (258 MBps) [2024-11-20T09:34:01.792Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:21:36.336 00:21:36.336 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75853 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:21:36.336 09:34:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:36.336 [2024-11-20 09:34:01.707721] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:21:36.336 [2024-11-20 09:34:01.707982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76500 ] 00:21:36.594 [2024-11-20 09:34:01.868681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.594 [2024-11-20 09:34:01.970024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.852 [2024-11-20 09:34:02.222044] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.852 [2024-11-20 09:34:02.222260] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.852 [2024-11-20 09:34:02.285947] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:36.852 [2024-11-20 09:34:02.286285] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:36.852 [2024-11-20 09:34:02.286557] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:37.110 [2024-11-20 09:34:02.463686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.463896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.110 [2024-11-20 09:34:02.463919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:37.110 [2024-11-20 09:34:02.463929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.463992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.464003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.110 [2024-11-20 09:34:02.464011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:37.110 [2024-11-20 09:34:02.464018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.464039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:37.110 
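
From here the bdev stack is rebuilt inside spdk_dd itself: --json replays the configuration captured from the live target right after the first startup (the save_subsystem_config call earlier in the trace). That presumably explains why nvc0n1 is at first reported missing (the config is still being applied) and why the blobstore on the cache device runs recovery after the SIGKILL. A sketch of how that config file is produced and consumed; the redirection into ftl.json is an assumption, since only the three commands themselves appear in the trace:

  # Capture the live bdev configuration as a full subsystems document.
  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed target path

  # Replay it in a standalone spdk_dd and write the second gigabyte directly
  # to the ftl0 bdev, seeking past the data written before the kill.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
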
[2024-11-20 09:34:02.464791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.110 [2024-11-20 09:34:02.464819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.464827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.110 [2024-11-20 09:34:02.464836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:21:37.110 [2024-11-20 09:34:02.464843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.465918] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.110 [2024-11-20 09:34:02.478178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.478223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.110 [2024-11-20 09:34:02.478235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.261 ms 00:21:37.110 [2024-11-20 09:34:02.478243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.478296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.478322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.110 [2024-11-20 09:34:02.478331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:37.110 [2024-11-20 09:34:02.478355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.483048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.483080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.110 [2024-11-20 09:34:02.483089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:21:37.110 [2024-11-20 09:34:02.483097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.110 [2024-11-20 09:34:02.483168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.110 [2024-11-20 09:34:02.483176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.110 [2024-11-20 09:34:02.483185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:37.111 [2024-11-20 09:34:02.483192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.111 [2024-11-20 09:34:02.483233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.111 [2024-11-20 09:34:02.483246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.111 [2024-11-20 09:34:02.483254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:37.111 [2024-11-20 09:34:02.483261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.111 [2024-11-20 09:34:02.483281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.111 [2024-11-20 09:34:02.486448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.111 [2024-11-20 09:34:02.486490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.111 [2024-11-20 09:34:02.486501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:21:37.111 [2024-11-20 09:34:02.486508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:37.111 [2024-11-20 09:34:02.486536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.111 [2024-11-20 09:34:02.486544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.111 [2024-11-20 09:34:02.486552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:37.111 [2024-11-20 09:34:02.486559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.111 [2024-11-20 09:34:02.486578] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.111 [2024-11-20 09:34:02.486599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.111 [2024-11-20 09:34:02.486633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.111 [2024-11-20 09:34:02.486648] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.111 [2024-11-20 09:34:02.486750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.111 [2024-11-20 09:34:02.486760] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.111 [2024-11-20 09:34:02.486770] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.111 [2024-11-20 09:34:02.486780] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.111 [2024-11-20 09:34:02.486791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.111 [2024-11-20 09:34:02.486799] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:37.111 [2024-11-20 09:34:02.486806] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.111 [2024-11-20 09:34:02.486813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.111 [2024-11-20 09:34:02.486819] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.111 [2024-11-20 09:34:02.486826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.111 [2024-11-20 09:34:02.486834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.111 [2024-11-20 09:34:02.486841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:21:37.111 [2024-11-20 09:34:02.486848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.111 [2024-11-20 09:34:02.486929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.111 [2024-11-20 09:34:02.486939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.111 [2024-11-20 09:34:02.486947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:37.111 [2024-11-20 09:34:02.486953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.111 [2024-11-20 09:34:02.487054] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.111 [2024-11-20 09:34:02.487063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.111 [2024-11-20 09:34:02.487071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487079] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.111 [2024-11-20 09:34:02.487093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.111 [2024-11-20 09:34:02.487114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.111 [2024-11-20 09:34:02.487127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.111 [2024-11-20 09:34:02.487139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:37.111 [2024-11-20 09:34:02.487145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.111 [2024-11-20 09:34:02.487152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.111 [2024-11-20 09:34:02.487158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:37.111 [2024-11-20 09:34:02.487165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.111 [2024-11-20 09:34:02.487178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.111 [2024-11-20 09:34:02.487198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.111 [2024-11-20 09:34:02.487217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.111 [2024-11-20 09:34:02.487237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.111 [2024-11-20 09:34:02.487257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.111 [2024-11-20 09:34:02.487275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.111 [2024-11-20 09:34:02.487288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:37.111 
[2024-11-20 09:34:02.487294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:37.111 [2024-11-20 09:34:02.487327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.111 [2024-11-20 09:34:02.487333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.111 [2024-11-20 09:34:02.487340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:37.111 [2024-11-20 09:34:02.487347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.111 [2024-11-20 09:34:02.487360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:37.111 [2024-11-20 09:34:02.487366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487373] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.111 [2024-11-20 09:34:02.487381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.111 [2024-11-20 09:34:02.487388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.111 [2024-11-20 09:34:02.487404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.111 [2024-11-20 09:34:02.487411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.111 [2024-11-20 09:34:02.487417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.111 [2024-11-20 09:34:02.487424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.111 [2024-11-20 09:34:02.487432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.111 [2024-11-20 09:34:02.487439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.111 [2024-11-20 09:34:02.487447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.111 [2024-11-20 09:34:02.487456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.111 [2024-11-20 09:34:02.487465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:37.111 [2024-11-20 09:34:02.487472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:37.111 [2024-11-20 09:34:02.487480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:37.111 [2024-11-20 09:34:02.487487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:37.111 [2024-11-20 09:34:02.487493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:37.111 [2024-11-20 09:34:02.487500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:37.111 [2024-11-20 09:34:02.487507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:21:37.111 [2024-11-20 09:34:02.487514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:37.112 [2024-11-20 09:34:02.487521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:37.112 [2024-11-20 09:34:02.487528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:37.112 [2024-11-20 09:34:02.487562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.112 [2024-11-20 09:34:02.487570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.112 [2024-11-20 09:34:02.487585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.112 [2024-11-20 09:34:02.487592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.112 [2024-11-20 09:34:02.487599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.112 [2024-11-20 09:34:02.487606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.487613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.112 [2024-11-20 09:34:02.487620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:21:37.112 [2024-11-20 09:34:02.487627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.513376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.513536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.112 [2024-11-20 09:34:02.513554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.694 ms 00:21:37.112 [2024-11-20 09:34:02.513564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.513670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.513686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.112 [2024-11-20 09:34:02.513696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:37.112 [2024-11-20 
09:34:02.513705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.557639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.557687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.112 [2024-11-20 09:34:02.557701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.869 ms 00:21:37.112 [2024-11-20 09:34:02.557712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.557767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.557776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.112 [2024-11-20 09:34:02.557785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:37.112 [2024-11-20 09:34:02.557792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.558155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.558172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.112 [2024-11-20 09:34:02.558181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:37.112 [2024-11-20 09:34:02.558189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.112 [2024-11-20 09:34:02.558341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.112 [2024-11-20 09:34:02.558351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.112 [2024-11-20 09:34:02.558360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:37.112 [2024-11-20 09:34:02.558367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.571627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.571767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.370 [2024-11-20 09:34:02.571783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.242 ms 00:21:37.370 [2024-11-20 09:34:02.571791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.584728] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:37.370 [2024-11-20 09:34:02.584851] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.370 [2024-11-20 09:34:02.584911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.584932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.370 [2024-11-20 09:34:02.584952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.020 ms 00:21:37.370 [2024-11-20 09:34:02.584970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.610497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.610635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.370 [2024-11-20 09:34:02.610698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:21:37.370 [2024-11-20 09:34:02.610720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:37.370 [2024-11-20 09:34:02.621895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.622003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.370 [2024-11-20 09:34:02.622052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.131 ms 00:21:37.370 [2024-11-20 09:34:02.622074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.633596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.633738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.370 [2024-11-20 09:34:02.633796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.481 ms 00:21:37.370 [2024-11-20 09:34:02.633839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.634492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.634581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.370 [2024-11-20 09:34:02.634630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:21:37.370 [2024-11-20 09:34:02.634652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.689554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.689746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:37.370 [2024-11-20 09:34:02.689799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.870 ms 00:21:37.370 [2024-11-20 09:34:02.689822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.700401] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:37.370 [2024-11-20 09:34:02.703100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.703203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.370 [2024-11-20 09:34:02.703315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.226 ms 00:21:37.370 [2024-11-20 09:34:02.703338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.703449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.703854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:37.370 [2024-11-20 09:34:02.703967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:37.370 [2024-11-20 09:34:02.703992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.704149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.704239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.370 [2024-11-20 09:34:02.704282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:37.370 [2024-11-20 09:34:02.704350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.704393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.704444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:21:37.370 [2024-11-20 09:34:02.704498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.370 [2024-11-20 09:34:02.704520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.370 [2024-11-20 09:34:02.704597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:37.370 [2024-11-20 09:34:02.704623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.370 [2024-11-20 09:34:02.704641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:37.370 [2024-11-20 09:34:02.704692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:37.371 [2024-11-20 09:34:02.704714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.371 [2024-11-20 09:34:02.727571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.371 [2024-11-20 09:34:02.727689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.371 [2024-11-20 09:34:02.727741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.819 ms 00:21:37.371 [2024-11-20 09:34:02.727764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.371 [2024-11-20 09:34:02.727871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.371 [2024-11-20 09:34:02.727897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.371 [2024-11-20 09:34:02.727972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:37.371 [2024-11-20 09:34:02.727994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.371 [2024-11-20 09:34:02.728913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.815 ms, result 0 00:21:38.302  [2024-11-20T09:34:05.130Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-20T09:34:06.063Z] Copying: 91/1024 [MB] (45 MBps) [2024-11-20T09:34:06.995Z] Copying: 137/1024 [MB] (46 MBps) [2024-11-20T09:34:07.928Z] Copying: 178/1024 [MB] (41 MBps) [2024-11-20T09:34:08.860Z] Copying: 215/1024 [MB] (36 MBps) [2024-11-20T09:34:09.836Z] Copying: 261/1024 [MB] (45 MBps) [2024-11-20T09:34:10.768Z] Copying: 309/1024 [MB] (47 MBps) [2024-11-20T09:34:12.141Z] Copying: 356/1024 [MB] (46 MBps) [2024-11-20T09:34:13.073Z] Copying: 405/1024 [MB] (49 MBps) [2024-11-20T09:34:14.005Z] Copying: 452/1024 [MB] (46 MBps) [2024-11-20T09:34:14.938Z] Copying: 497/1024 [MB] (45 MBps) [2024-11-20T09:34:15.869Z] Copying: 541/1024 [MB] (43 MBps) [2024-11-20T09:34:16.801Z] Copying: 583/1024 [MB] (42 MBps) [2024-11-20T09:34:18.171Z] Copying: 629/1024 [MB] (45 MBps) [2024-11-20T09:34:19.104Z] Copying: 671/1024 [MB] (41 MBps) [2024-11-20T09:34:20.038Z] Copying: 717/1024 [MB] (46 MBps) [2024-11-20T09:34:20.973Z] Copying: 762/1024 [MB] (45 MBps) [2024-11-20T09:34:21.906Z] Copying: 808/1024 [MB] (46 MBps) [2024-11-20T09:34:22.893Z] Copying: 861/1024 [MB] (52 MBps) [2024-11-20T09:34:23.825Z] Copying: 914/1024 [MB] (53 MBps) [2024-11-20T09:34:24.758Z] Copying: 968/1024 [MB] (53 MBps) [2024-11-20T09:34:26.129Z] Copying: 1019/1024 [MB] (50 MBps) [2024-11-20T09:34:26.129Z] Copying: 1048424/1048576 [kB] (4700 kBps) [2024-11-20T09:34:26.129Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-20 09:34:25.923466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.923526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:22:00.673 [2024-11-20 09:34:25.923544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:00.673 [2024-11-20 09:34:25.923557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:25.925809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.673 [2024-11-20 09:34:25.931758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.931809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.673 [2024-11-20 09:34:25.931825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.901 ms 00:22:00.673 [2024-11-20 09:34:25.931837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:25.942818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.942941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.673 [2024-11-20 09:34:25.942963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.054 ms 00:22:00.673 [2024-11-20 09:34:25.942974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:25.960749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.960862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.673 [2024-11-20 09:34:25.960948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.749 ms 00:22:00.673 [2024-11-20 09:34:25.960986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:25.967717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.967834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.673 [2024-11-20 09:34:25.967922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.520 ms 00:22:00.673 [2024-11-20 09:34:25.967960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:25.991556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:25.991699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.673 [2024-11-20 09:34:25.991783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.456 ms 00:22:00.673 [2024-11-20 09:34:25.991820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:26.005577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:26.005701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.673 [2024-11-20 09:34:26.005776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.693 ms 00:22:00.673 [2024-11-20 09:34:26.005812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.673 [2024-11-20 09:34:26.062774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.673 [2024-11-20 09:34:26.062955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.673 [2024-11-20 09:34:26.063032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.891 ms 00:22:00.673 [2024-11-20 09:34:26.063080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:00.673 [2024-11-20 09:34:26.086716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.674 [2024-11-20 09:34:26.086850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.674 [2024-11-20 09:34:26.086918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.549 ms 00:22:00.674 [2024-11-20 09:34:26.086950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.674 [2024-11-20 09:34:26.109833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.674 [2024-11-20 09:34:26.109953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.674 [2024-11-20 09:34:26.110020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.824 ms 00:22:00.674 [2024-11-20 09:34:26.110053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.934 [2024-11-20 09:34:26.132315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.934 [2024-11-20 09:34:26.132423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.934 [2024-11-20 09:34:26.132490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.177 ms 00:22:00.934 [2024-11-20 09:34:26.132525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.934 [2024-11-20 09:34:26.154836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.934 [2024-11-20 09:34:26.154940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.934 [2024-11-20 09:34:26.155007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.223 ms 00:22:00.934 [2024-11-20 09:34:26.155042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.934 [2024-11-20 09:34:26.155121] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.934 [2024-11-20 09:34:26.155164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:22:00.934 [2024-11-20 09:34:26.155218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.155978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156160] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.156992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 
09:34:26.157379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:22:00.934 [2024-11-20 09:34:26.157703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.934 [2024-11-20 09:34:26.157716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.157999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.935 [2024-11-20 09:34:26.158219] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.935 [2024-11-20 09:34:26.158235] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5a80b22-005b-497e-b6ca-72bcebdf972a 00:22:00.935 [2024-11-20 09:34:26.158248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:22:00.935 [2024-11-20 09:34:26.158264] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:22:00.935 [2024-11-20 09:34:26.158283] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:22:00.935 [2024-11-20 09:34:26.158297] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:22:00.935 [2024-11-20 09:34:26.158319] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.935 [2024-11-20 09:34:26.158332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.935 [2024-11-20 09:34:26.158345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.935 [2024-11-20 09:34:26.158356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.935 [2024-11-20 09:34:26.158366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.935 [2024-11-20 09:34:26.158379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-20 09:34:26.158392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.935 [2024-11-20 
09:34:26.158405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:22:00.935 [2024-11-20 09:34:26.158418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.171970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-20 09:34:26.172082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.935 [2024-11-20 09:34:26.172101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.524 ms 00:22:00.935 [2024-11-20 09:34:26.172113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.172553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-20 09:34:26.172579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.935 [2024-11-20 09:34:26.172592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:22:00.935 [2024-11-20 09:34:26.172603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.205462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.205495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.935 [2024-11-20 09:34:26.205509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.205520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.205592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.205605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.935 [2024-11-20 09:34:26.205617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.205637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.205710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.205726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.935 [2024-11-20 09:34:26.205738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.205750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.205773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.205786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.935 [2024-11-20 09:34:26.205798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.205810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.283721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.283762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.935 [2024-11-20 09:34:26.283779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.283789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.347311] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.935 [2024-11-20 09:34:26.347333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.347354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.347470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.935 [2024-11-20 09:34:26.347484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.347496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.347557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.935 [2024-11-20 09:34:26.347570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.347582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.347733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.935 [2024-11-20 09:34:26.347746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.347759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.935 [2024-11-20 09:34:26.347815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.935 [2024-11-20 09:34:26.347828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.935 [2024-11-20 09:34:26.347840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-20 09:34:26.347884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.936 [2024-11-20 09:34:26.347902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.936 [2024-11-20 09:34:26.347914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.936 [2024-11-20 09:34:26.347927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.936 [2024-11-20 09:34:26.347979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.936 [2024-11-20 09:34:26.347994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.936 [2024-11-20 09:34:26.348014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.936 [2024-11-20 09:34:26.348027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.936 [2024-11-20 09:34:26.348176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 425.577 ms, result 0 00:22:04.214 00:22:04.214 00:22:04.214 09:34:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:06.111 09:34:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.368 [2024-11-20 09:34:31.588681] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:22:06.368 [2024-11-20 09:34:31.588774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76802 ] 00:22:06.368 [2024-11-20 09:34:31.744666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.624 [2024-11-20 09:34:31.844323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.882 [2024-11-20 09:34:32.096109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.882 [2024-11-20 09:34:32.096168] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.882 [2024-11-20 09:34:32.250880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.882 [2024-11-20 09:34:32.250931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.882 [2024-11-20 09:34:32.250949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.882 [2024-11-20 09:34:32.250958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.882 [2024-11-20 09:34:32.251003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.882 [2024-11-20 09:34:32.251014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.882 [2024-11-20 09:34:32.251025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:06.882 [2024-11-20 09:34:32.251032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.882 [2024-11-20 09:34:32.251051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.882 [2024-11-20 09:34:32.251888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.882 [2024-11-20 09:34:32.251916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.882 [2024-11-20 09:34:32.251924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.882 [2024-11-20 09:34:32.251933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:22:06.882 [2024-11-20 09:34:32.251940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.882 [2024-11-20 09:34:32.253053] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.882 [2024-11-20 09:34:32.265742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.882 [2024-11-20 09:34:32.265886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.883 [2024-11-20 09:34:32.265904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.690 ms 00:22:06.883 [2024-11-20 09:34:32.265912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.265967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.265976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.883 [2024-11-20 09:34:32.265984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:06.883 [2024-11-20 
09:34:32.265991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.271028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.271136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.883 [2024-11-20 09:34:32.271194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:22:06.883 [2024-11-20 09:34:32.271216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.271319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.271626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.883 [2024-11-20 09:34:32.271712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:06.883 [2024-11-20 09:34:32.271738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.271814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.271929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.883 [2024-11-20 09:34:32.271954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:06.883 [2024-11-20 09:34:32.271974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.272013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.883 [2024-11-20 09:34:32.275532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.275636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.883 [2024-11-20 09:34:32.275690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:22:06.883 [2024-11-20 09:34:32.275716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.275762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.275787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.883 [2024-11-20 09:34:32.275807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.883 [2024-11-20 09:34:32.275851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.275899] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.883 [2024-11-20 09:34:32.275934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.883 [2024-11-20 09:34:32.276067] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.883 [2024-11-20 09:34:32.276108] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:06.883 [2024-11-20 09:34:32.276263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.883 [2024-11-20 09:34:32.276359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.883 [2024-11-20 09:34:32.276392] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:06.883 
[2024-11-20 09:34:32.276423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.883 [2024-11-20 09:34:32.276507] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.883 [2024-11-20 09:34:32.276537] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:06.883 [2024-11-20 09:34:32.276556] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.883 [2024-11-20 09:34:32.276575] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.883 [2024-11-20 09:34:32.276629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.883 [2024-11-20 09:34:32.276656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.276675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.883 [2024-11-20 09:34:32.276695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:22:06.883 [2024-11-20 09:34:32.276713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.276821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.883 [2024-11-20 09:34:32.276843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.883 [2024-11-20 09:34:32.276863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:06.883 [2024-11-20 09:34:32.276881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.883 [2024-11-20 09:34:32.277028] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.883 [2024-11-20 09:34:32.277099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.883 [2024-11-20 09:34:32.277122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.883 [2024-11-20 09:34:32.277178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.883 [2024-11-20 09:34:32.277309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.883 [2024-11-20 09:34:32.277348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.883 [2024-11-20 09:34:32.277365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:06.883 [2024-11-20 09:34:32.277412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.883 [2024-11-20 09:34:32.277430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.883 [2024-11-20 09:34:32.277447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:06.883 [2024-11-20 09:34:32.277498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:22:06.883 [2024-11-20 09:34:32.277537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.883 [2024-11-20 09:34:32.277622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.883 [2024-11-20 09:34:32.277726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.883 [2024-11-20 09:34:32.277832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.883 [2024-11-20 09:34:32.277942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:06.883 [2024-11-20 09:34:32.277960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.883 [2024-11-20 09:34:32.277978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.883 [2024-11-20 09:34:32.277996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:06.883 [2024-11-20 09:34:32.278038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.883 [2024-11-20 09:34:32.278056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.883 [2024-11-20 09:34:32.278074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:06.883 [2024-11-20 09:34:32.278115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.883 [2024-11-20 09:34:32.278136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.883 [2024-11-20 09:34:32.278155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:06.883 [2024-11-20 09:34:32.278199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.884 [2024-11-20 09:34:32.278220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.884 [2024-11-20 09:34:32.278238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:06.884 [2024-11-20 09:34:32.278277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.884 [2024-11-20 09:34:32.278297] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.884 [2024-11-20 09:34:32.278327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.884 [2024-11-20 09:34:32.278346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.884 [2024-11-20 09:34:32.278440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.884 [2024-11-20 09:34:32.278463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.884 [2024-11-20 09:34:32.278491] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.884 [2024-11-20 09:34:32.278509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.884 [2024-11-20 09:34:32.278528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.884 [2024-11-20 09:34:32.278575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.884 [2024-11-20 09:34:32.278597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.884 [2024-11-20 09:34:32.278617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.884 [2024-11-20 09:34:32.278647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.278704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:06.884 [2024-11-20 09:34:32.278736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:06.884 [2024-11-20 09:34:32.278764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:06.884 [2024-11-20 09:34:32.278791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:06.884 [2024-11-20 09:34:32.278842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:06.884 [2024-11-20 09:34:32.278871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:06.884 [2024-11-20 09:34:32.278899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:06.884 [2024-11-20 09:34:32.278927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:06.884 [2024-11-20 09:34:32.278983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:06.884 [2024-11-20 09:34:32.279012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:06.884 [2024-11-20 09:34:32.279190] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.884 [2024-11-20 09:34:32.279247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.884 [2024-11-20 09:34:32.279326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.884 [2024-11-20 09:34:32.279356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.884 [2024-11-20 09:34:32.279412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.884 [2024-11-20 09:34:32.279441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.884 [2024-11-20 09:34:32.279460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.884 [2024-11-20 09:34:32.279502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:22:06.884 [2024-11-20 09:34:32.279523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.884 [2024-11-20 09:34:32.305754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.884 [2024-11-20 09:34:32.305881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.884 [2024-11-20 09:34:32.305931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.094 ms 00:22:06.884 [2024-11-20 09:34:32.305952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.884 [2024-11-20 09:34:32.306043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.884 [2024-11-20 09:34:32.306052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.884 [2024-11-20 09:34:32.306060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:06.884 [2024-11-20 09:34:32.306067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.142 [2024-11-20 09:34:32.344531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.142 [2024-11-20 09:34:32.344661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.142 [2024-11-20 09:34:32.344681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.407 ms 00:22:07.142 [2024-11-20 09:34:32.344689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.142 [2024-11-20 09:34:32.344734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.142 [2024-11-20 09:34:32.344744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.143 [2024-11-20 09:34:32.344752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.143 [2024-11-20 09:34:32.344764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.345120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.345136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.143 [2024-11-20 09:34:32.345145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:22:07.143 [2024-11-20 09:34:32.345153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.345280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
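The layout dump that just completed is internally consistent, and that consistency can be verified from the dumped figures alone. A minimal sketch in Python (an editor's illustration, not part of the test run; it assumes the "L2P address size" is in bytes and the 4 KiB block size SPDK FTL uses), cross-checking the l2p region size against the L2P geometry and confirming that the nvc metadata regions tile the cache end to end:

    # Figures copied from the layout dump above.
    l2p_entries = 20971520                     # "L2P entries"
    entry_size = 4                             # "L2P address size", assumed bytes
    assert l2p_entries * entry_size == 80 * 1024 * 1024   # "Region l2p ... blocks: 80.00 MiB"

    # (blk_offs, blk_sz) pairs from "SB metadata layout - nvc", in dump order.
    regions = [
        (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
        (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
        (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
        (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
    ]
    for (offs, sz), (next_offs, _) in zip(regions, regions[1:]):
        assert offs + sz == next_offs          # regions are contiguous: no gaps, no overlap
    end_blocks = regions[-1][0] + regions[-1][1]
    print(end_blocks * 4 / 1024)               # -> 5171.0, the dumped NV cache capacity in MiB

Both checks pass: the 80.00 MiB l2p region is exactly entries * entry_size, and the nvc regions end at 0x143300 blocks, i.e. precisely the 5171.00 MiB NV cache device reported at layout setup.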
00:22:07.143 [2024-11-20 09:34:32.345289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.143 [2024-11-20 09:34:32.345323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:07.143 [2024-11-20 09:34:32.345336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.358625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.358658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.143 [2024-11-20 09:34:32.358671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.271 ms 00:22:07.143 [2024-11-20 09:34:32.358678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.371313] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:07.143 [2024-11-20 09:34:32.371441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.143 [2024-11-20 09:34:32.371461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.371474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.143 [2024-11-20 09:34:32.371484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.688 ms 00:22:07.143 [2024-11-20 09:34:32.371491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.395529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.395571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.143 [2024-11-20 09:34:32.395581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.003 ms 00:22:07.143 [2024-11-20 09:34:32.395589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.406982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.407108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.143 [2024-11-20 09:34:32.407123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.369 ms 00:22:07.143 [2024-11-20 09:34:32.407130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.418239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.418384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.143 [2024-11-20 09:34:32.418400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.079 ms 00:22:07.143 [2024-11-20 09:34:32.418408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.419022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.419043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.143 [2024-11-20 09:34:32.419052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:22:07.143 [2024-11-20 09:34:32.419062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.474386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.474439] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:07.143 [2024-11-20 09:34:32.474464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.305 ms 00:22:07.143 [2024-11-20 09:34:32.474480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.485471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:07.143 [2024-11-20 09:34:32.487883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.487915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.143 [2024-11-20 09:34:32.487928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.363 ms 00:22:07.143 [2024-11-20 09:34:32.487938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.488032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.488044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:07.143 [2024-11-20 09:34:32.488052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.143 [2024-11-20 09:34:32.488062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.489563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.489596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.143 [2024-11-20 09:34:32.489606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:22:07.143 [2024-11-20 09:34:32.489614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.489638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.489647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.143 [2024-11-20 09:34:32.489655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.143 [2024-11-20 09:34:32.489662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.489698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:07.143 [2024-11-20 09:34:32.489710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.489718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:07.143 [2024-11-20 09:34:32.489726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:07.143 [2024-11-20 09:34:32.489734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.514118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.514171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.143 [2024-11-20 09:34:32.514183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.365 ms 00:22:07.143 [2024-11-20 09:34:32.514196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.514268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.143 [2024-11-20 09:34:32.514279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.143 [2024-11-20 09:34:32.514288] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:07.143 [2024-11-20 09:34:32.514295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.143 [2024-11-20 09:34:32.515287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.969 ms, result 0 00:22:08.516  [2024-11-20T09:34:34.906Z] Copying: 1324/1048576 [kB] (1324 kBps) [2024-11-20T09:34:35.898Z] Copying: 13/1024 [MB] (11 MBps) [2024-11-20T09:34:36.832Z] Copying: 66/1024 [MB] (53 MBps) [2024-11-20T09:34:37.764Z] Copying: 118/1024 [MB] (51 MBps) [2024-11-20T09:34:39.137Z] Copying: 172/1024 [MB] (53 MBps) [2024-11-20T09:34:39.714Z] Copying: 226/1024 [MB] (53 MBps) [2024-11-20T09:34:41.085Z] Copying: 279/1024 [MB] (53 MBps) [2024-11-20T09:34:42.020Z] Copying: 335/1024 [MB] (56 MBps) [2024-11-20T09:34:42.954Z] Copying: 389/1024 [MB] (53 MBps) [2024-11-20T09:34:43.926Z] Copying: 442/1024 [MB] (53 MBps) [2024-11-20T09:34:44.861Z] Copying: 497/1024 [MB] (55 MBps) [2024-11-20T09:34:45.793Z] Copying: 552/1024 [MB] (55 MBps) [2024-11-20T09:34:46.727Z] Copying: 607/1024 [MB] (54 MBps) [2024-11-20T09:34:48.129Z] Copying: 664/1024 [MB] (56 MBps) [2024-11-20T09:34:49.063Z] Copying: 716/1024 [MB] (52 MBps) [2024-11-20T09:34:50.000Z] Copying: 750/1024 [MB] (33 MBps) [2024-11-20T09:34:50.933Z] Copying: 791/1024 [MB] (40 MBps) [2024-11-20T09:34:51.892Z] Copying: 831/1024 [MB] (40 MBps) [2024-11-20T09:34:52.823Z] Copying: 883/1024 [MB] (52 MBps) [2024-11-20T09:34:53.756Z] Copying: 936/1024 [MB] (52 MBps) [2024-11-20T09:34:54.689Z] Copying: 989/1024 [MB] (53 MBps) [2024-11-20T09:34:54.689Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 09:34:54.619852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.233 [2024-11-20 09:34:54.619934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:29.233 [2024-11-20 09:34:54.619972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:29.233 [2024-11-20 09:34:54.619986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.233 [2024-11-20 09:34:54.620022] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:29.233 [2024-11-20 09:34:54.624630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.233 [2024-11-20 09:34:54.624661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:29.233 [2024-11-20 09:34:54.624674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.583 ms 00:22:29.233 [2024-11-20 09:34:54.624684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.233 [2024-11-20 09:34:54.624959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.233 [2024-11-20 09:34:54.624972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:29.233 [2024-11-20 09:34:54.624986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:29.233 [2024-11-20 09:34:54.624995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.233 [2024-11-20 09:34:54.635517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.233 [2024-11-20 09:34:54.635619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:29.233 [2024-11-20 09:34:54.635677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.505 ms 00:22:29.233 [2024-11-20 09:34:54.635701] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.233 [2024-11-20 09:34:54.641865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.233 [2024-11-20 09:34:54.641960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:29.233 [2024-11-20 09:34:54.642014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:22:29.233 [2024-11-20 09:34:54.642044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.234 [2024-11-20 09:34:54.665738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.234 [2024-11-20 09:34:54.665860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:29.234 [2024-11-20 09:34:54.665916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.602 ms 00:22:29.234 [2024-11-20 09:34:54.665938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.234 [2024-11-20 09:34:54.678909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.234 [2024-11-20 09:34:54.679021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:29.234 [2024-11-20 09:34:54.679077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:22:29.234 [2024-11-20 09:34:54.679102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.234 [2024-11-20 09:34:54.681213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.234 [2024-11-20 09:34:54.681343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:29.234 [2024-11-20 09:34:54.681401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:22:29.234 [2024-11-20 09:34:54.681424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.492 [2024-11-20 09:34:54.704467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.492 [2024-11-20 09:34:54.704562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:29.492 [2024-11-20 09:34:54.704608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.008 ms 00:22:29.492 [2024-11-20 09:34:54.704629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.493 [2024-11-20 09:34:54.727378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.493 [2024-11-20 09:34:54.727471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:29.493 [2024-11-20 09:34:54.727526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.719 ms 00:22:29.493 [2024-11-20 09:34:54.727547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.493 [2024-11-20 09:34:54.749641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.493 [2024-11-20 09:34:54.749734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:29.493 [2024-11-20 09:34:54.749778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.065 ms 00:22:29.493 [2024-11-20 09:34:54.749799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.493 [2024-11-20 09:34:54.771668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.493 [2024-11-20 09:34:54.771769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:29.493 [2024-11-20 09:34:54.771813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 21.818 ms 00:22:29.493 [2024-11-20 09:34:54.771834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.493 [2024-11-20 09:34:54.771862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:29.493 [2024-11-20 09:34:54.771886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:29.493 [2024-11-20 09:34:54.771917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:29.493 [2024-11-20 09:34:54.771945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.771972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:22:29.493 [2024-11-20 09:34:54.772722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.772997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:29.493 [2024-11-20 09:34:54.773418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773648] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:29.494 [2024-11-20 09:34:54.773677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:29.494 [2024-11-20 09:34:54.773685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5a80b22-005b-497e-b6ca-72bcebdf972a 00:22:29.494 [2024-11-20 09:34:54.773692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:29.494 [2024-11-20 09:34:54.773699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360 00:22:29.494 [2024-11-20 09:34:54.773706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133376 00:22:29.494 [2024-11-20 09:34:54.773718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:22:29.494 [2024-11-20 09:34:54.773725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:29.494 [2024-11-20 09:34:54.773732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:29.494 [2024-11-20 09:34:54.773739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:29.494 [2024-11-20 09:34:54.773751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:29.494 [2024-11-20 09:34:54.773757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:29.494 [2024-11-20 09:34:54.773764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.494 [2024-11-20 09:34:54.773774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:29.494 [2024-11-20 09:34:54.773782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.903 ms 00:22:29.494 [2024-11-20 09:34:54.773789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.785979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.494 [2024-11-20 09:34:54.786072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:29.494 [2024-11-20 09:34:54.786121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.173 ms 00:22:29.494 [2024-11-20 09:34:54.786142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.786529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.494 [2024-11-20 09:34:54.786608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:29.494 [2024-11-20 09:34:54.786651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:22:29.494 [2024-11-20 09:34:54.786672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.819052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.494 [2024-11-20 09:34:54.819146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.494 [2024-11-20 09:34:54.819191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.494 [2024-11-20 09:34:54.819212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.819275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.494 
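The statistics block dumped above reports total writes 135360, user writes 133376, and WAF 1.0149. Those three numbers agree if WAF is taken as total media writes divided by user writes, which a one-liner confirms (an illustration using only the dumped values):

    total_writes = 135360                          # "total writes" from the stats dump
    user_writes = 133376                           # "user writes"
    print(round(total_writes / user_writes, 4))    # -> 1.0149, the reported WAF

In other words, roughly 1.5% of the media writes in this run were issued by the FTL itself rather than by the user workload.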
[2024-11-20 09:34:54.819296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.494 [2024-11-20 09:34:54.819327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.494 [2024-11-20 09:34:54.819345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.819409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.494 [2024-11-20 09:34:54.819438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.494 [2024-11-20 09:34:54.819457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.494 [2024-11-20 09:34:54.819504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.819533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.494 [2024-11-20 09:34:54.819554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.494 [2024-11-20 09:34:54.819572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.494 [2024-11-20 09:34:54.819590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.494 [2024-11-20 09:34:54.896927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.494 [2024-11-20 09:34:54.897054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.494 [2024-11-20 09:34:54.897106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.494 [2024-11-20 09:34:54.897128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.959252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.959390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.755 [2024-11-20 09:34:54.959442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.959465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.959563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.959707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.755 [2024-11-20 09:34:54.959753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.959821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.959866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.959875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.755 [2024-11-20 09:34:54.959883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.959890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.959979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.959988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.755 [2024-11-20 09:34:54.959996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.960006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.960032] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.960041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:29.755 [2024-11-20 09:34:54.960049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.960055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.960087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.960095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.755 [2024-11-20 09:34:54.960103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.960112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.960148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.755 [2024-11-20 09:34:54.960157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.755 [2024-11-20 09:34:54.960165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.755 [2024-11-20 09:34:54.960172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.755 [2024-11-20 09:34:54.960274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.413 ms, result 0 00:22:30.334 00:22:30.334 00:22:30.334 09:34:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:32.858 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:32.858 09:34:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:32.858 [2024-11-20 09:34:57.809811] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
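The spdk_dd invocation above reads data back from the ftl0 bdev after the dirty shutdown, presumably for the same kind of md5 comparison that the preceding md5sum check on testfile just passed. With --count=262144 and --skip=262144, and assuming count/skip are in bdev blocks at the same 4 KiB FTL block size as above, this pass covers exactly a second 1 GiB of the device (a back-of-the-envelope sketch, not test output):

    count = skip = 262144                 # from the spdk_dd command line
    block_size = 4096                     # assumed 4 KiB FTL block size, in bytes
    print(count * block_size / 2**20)     # -> 1024.0 MiB read, starting 1024 MiB in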
00:22:32.858 [2024-11-20 09:34:57.810062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77078 ] 00:22:32.858 [2024-11-20 09:34:57.969796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.858 [2024-11-20 09:34:58.067074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.117 [2024-11-20 09:34:58.319812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:33.117 [2024-11-20 09:34:58.320026] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:33.117 [2024-11-20 09:34:58.473145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.117 [2024-11-20 09:34:58.473192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:33.117 [2024-11-20 09:34:58.473210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:33.117 [2024-11-20 09:34:58.473218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.473265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.117 [2024-11-20 09:34:58.473275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.117 [2024-11-20 09:34:58.473285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:33.117 [2024-11-20 09:34:58.473293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.473327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:33.117 [2024-11-20 09:34:58.473986] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:33.117 [2024-11-20 09:34:58.474007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.117 [2024-11-20 09:34:58.474015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.117 [2024-11-20 09:34:58.474023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:22:33.117 [2024-11-20 09:34:58.474031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.475156] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:33.117 [2024-11-20 09:34:58.487377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.117 [2024-11-20 09:34:58.487410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:33.117 [2024-11-20 09:34:58.487422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.222 ms 00:22:33.117 [2024-11-20 09:34:58.487429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.487485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.117 [2024-11-20 09:34:58.487494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:33.117 [2024-11-20 09:34:58.487502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:33.117 [2024-11-20 09:34:58.487509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.492249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:33.117 [2024-11-20 09:34:58.492285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.117 [2024-11-20 09:34:58.492323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:22:33.117 [2024-11-20 09:34:58.492335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.117 [2024-11-20 09:34:58.492410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.492418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.118 [2024-11-20 09:34:58.492426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:33.118 [2024-11-20 09:34:58.492433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.492473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.492482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:33.118 [2024-11-20 09:34:58.492489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:33.118 [2024-11-20 09:34:58.492497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.492516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:33.118 [2024-11-20 09:34:58.495736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.495764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.118 [2024-11-20 09:34:58.495773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:22:33.118 [2024-11-20 09:34:58.495782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.495808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.495816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:33.118 [2024-11-20 09:34:58.495824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:33.118 [2024-11-20 09:34:58.495830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.495849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:33.118 [2024-11-20 09:34:58.495866] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:33.118 [2024-11-20 09:34:58.495899] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:33.118 [2024-11-20 09:34:58.495915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:33.118 [2024-11-20 09:34:58.496018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:33.118 [2024-11-20 09:34:58.496028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:33.118 [2024-11-20 09:34:58.496038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:33.118 [2024-11-20 09:34:58.496047] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496056] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:33.118 [2024-11-20 09:34:58.496072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:33.118 [2024-11-20 09:34:58.496079] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:33.118 [2024-11-20 09:34:58.496085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:33.118 [2024-11-20 09:34:58.496094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.496101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:33.118 [2024-11-20 09:34:58.496109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:22:33.118 [2024-11-20 09:34:58.496116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.496197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.118 [2024-11-20 09:34:58.496205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:33.118 [2024-11-20 09:34:58.496212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:33.118 [2024-11-20 09:34:58.496219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.118 [2024-11-20 09:34:58.496336] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:33.118 [2024-11-20 09:34:58.496349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:33.118 [2024-11-20 09:34:58.496357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:33.118 [2024-11-20 09:34:58.496379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:33.118 [2024-11-20 09:34:58.496400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:33.118 [2024-11-20 09:34:58.496412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:33.118 [2024-11-20 09:34:58.496418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:33.118 [2024-11-20 09:34:58.496424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:33.118 [2024-11-20 09:34:58.496431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:33.118 [2024-11-20 09:34:58.496437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:33.118 [2024-11-20 09:34:58.496449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:33.118 [2024-11-20 09:34:58.496462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496468] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:33.118 [2024-11-20 09:34:58.496484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:33.118 [2024-11-20 09:34:58.496503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:33.118 [2024-11-20 09:34:58.496522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:33.118 [2024-11-20 09:34:58.496540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:33.118 [2024-11-20 09:34:58.496559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:33.118 [2024-11-20 09:34:58.496572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:33.118 [2024-11-20 09:34:58.496578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:33.118 [2024-11-20 09:34:58.496584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:33.118 [2024-11-20 09:34:58.496591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:33.118 [2024-11-20 09:34:58.496598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:33.118 [2024-11-20 09:34:58.496604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:33.118 [2024-11-20 09:34:58.496617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:33.118 [2024-11-20 09:34:58.496624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496630] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:33.118 [2024-11-20 09:34:58.496637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:33.118 [2024-11-20 09:34:58.496644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:33.118 [2024-11-20 09:34:58.496651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.118 [2024-11-20 09:34:58.496658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:33.118 [2024-11-20 09:34:58.496665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:33.119 [2024-11-20 09:34:58.496671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:33.119 
[2024-11-20 09:34:58.496679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:33.119 [2024-11-20 09:34:58.496685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:33.119 [2024-11-20 09:34:58.496692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:33.119 [2024-11-20 09:34:58.496700] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:33.119 [2024-11-20 09:34:58.496708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:33.119 [2024-11-20 09:34:58.496723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:33.119 [2024-11-20 09:34:58.496730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:33.119 [2024-11-20 09:34:58.496737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:33.119 [2024-11-20 09:34:58.496743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:33.119 [2024-11-20 09:34:58.496750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:33.119 [2024-11-20 09:34:58.496757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:33.119 [2024-11-20 09:34:58.496763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:33.119 [2024-11-20 09:34:58.496770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:33.119 [2024-11-20 09:34:58.496777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:33.119 [2024-11-20 09:34:58.496811] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:33.119 [2024-11-20 09:34:58.496821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:33.119 [2024-11-20 09:34:58.496836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:33.119 [2024-11-20 09:34:58.496843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:33.119 [2024-11-20 09:34:58.496849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:33.119 [2024-11-20 09:34:58.496856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.119 [2024-11-20 09:34:58.496864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:33.119 [2024-11-20 09:34:58.496871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:22:33.119 [2024-11-20 09:34:58.496878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.119 [2024-11-20 09:34:58.522332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.119 [2024-11-20 09:34:58.522365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.119 [2024-11-20 09:34:58.522375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.404 ms 00:22:33.119 [2024-11-20 09:34:58.522383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.119 [2024-11-20 09:34:58.522464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.119 [2024-11-20 09:34:58.522473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:33.119 [2024-11-20 09:34:58.522481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:33.119 [2024-11-20 09:34:58.522494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.569536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.569575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.378 [2024-11-20 09:34:58.569587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.995 ms 00:22:33.378 [2024-11-20 09:34:58.569595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.569635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.569644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.378 [2024-11-20 09:34:58.569652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:33.378 [2024-11-20 09:34:58.569663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.570013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.570028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.378 [2024-11-20 09:34:58.570037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:22:33.378 [2024-11-20 09:34:58.570044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.570160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.570170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.378 [2024-11-20 09:34:58.570178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:33.378 [2024-11-20 09:34:58.570190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.583180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.583210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.378 [2024-11-20 09:34:58.583222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:22:33.378 [2024-11-20 09:34:58.583230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.595257] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:33.378 [2024-11-20 09:34:58.595392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:33.378 [2024-11-20 09:34:58.595407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.595416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:33.378 [2024-11-20 09:34:58.595424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.075 ms 00:22:33.378 [2024-11-20 09:34:58.595431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.619294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.619348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:33.378 [2024-11-20 09:34:58.619358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.830 ms 00:22:33.378 [2024-11-20 09:34:58.619365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.630291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.630326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:33.378 [2024-11-20 09:34:58.630335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.893 ms 00:22:33.378 [2024-11-20 09:34:58.630342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.641350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.641462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:33.378 [2024-11-20 09:34:58.641476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.977 ms 00:22:33.378 [2024-11-20 09:34:58.641483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.642071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.642090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:33.378 [2024-11-20 09:34:58.642098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:22:33.378 [2024-11-20 09:34:58.642108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.695399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.695459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:33.378 [2024-11-20 09:34:58.695475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.274 ms 00:22:33.378 [2024-11-20 09:34:58.695483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.705613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:33.378 [2024-11-20 09:34:58.707886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.378 [2024-11-20 09:34:58.707914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:33.378 [2024-11-20 09:34:58.707926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.356 ms 00:22:33.378 [2024-11-20 09:34:58.707935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.378 [2024-11-20 09:34:58.708020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.708030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:33.379 [2024-11-20 09:34:58.708040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:33.379 [2024-11-20 09:34:58.708050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.379 [2024-11-20 09:34:58.708626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.708650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:33.379 [2024-11-20 09:34:58.708660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:22:33.379 [2024-11-20 09:34:58.708667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.379 [2024-11-20 09:34:58.708690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.708702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:33.379 [2024-11-20 09:34:58.708710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:33.379 [2024-11-20 09:34:58.708717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.379 [2024-11-20 09:34:58.708750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:33.379 [2024-11-20 09:34:58.708762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.708769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:33.379 [2024-11-20 09:34:58.708777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:33.379 [2024-11-20 09:34:58.708784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.379 [2024-11-20 09:34:58.731175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.731206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:33.379 [2024-11-20 09:34:58.731217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:22:33.379 [2024-11-20 09:34:58.731230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.379 [2024-11-20 09:34:58.731295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.379 [2024-11-20 09:34:58.731319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:33.379 [2024-11-20 09:34:58.731327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:33.379 [2024-11-20 09:34:58.731334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:33.379 [2024-11-20 09:34:58.732241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.687 ms, result 0 00:22:34.759  [2024-11-20T09:35:01.148Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-20T09:35:02.083Z] Copying: 91/1024 [MB] (45 MBps) [2024-11-20T09:35:03.026Z] Copying: 137/1024 [MB] (45 MBps) [2024-11-20T09:35:03.959Z] Copying: 183/1024 [MB] (46 MBps) [2024-11-20T09:35:05.328Z] Copying: 230/1024 [MB] (46 MBps) [2024-11-20T09:35:06.261Z] Copying: 276/1024 [MB] (46 MBps) [2024-11-20T09:35:07.204Z] Copying: 324/1024 [MB] (47 MBps) [2024-11-20T09:35:08.139Z] Copying: 372/1024 [MB] (47 MBps) [2024-11-20T09:35:09.072Z] Copying: 418/1024 [MB] (46 MBps) [2024-11-20T09:35:10.001Z] Copying: 467/1024 [MB] (49 MBps) [2024-11-20T09:35:10.932Z] Copying: 515/1024 [MB] (48 MBps) [2024-11-20T09:35:12.302Z] Copying: 563/1024 [MB] (47 MBps) [2024-11-20T09:35:13.234Z] Copying: 612/1024 [MB] (49 MBps) [2024-11-20T09:35:14.167Z] Copying: 661/1024 [MB] (48 MBps) [2024-11-20T09:35:15.099Z] Copying: 708/1024 [MB] (47 MBps) [2024-11-20T09:35:16.033Z] Copying: 755/1024 [MB] (46 MBps) [2024-11-20T09:35:16.965Z] Copying: 803/1024 [MB] (48 MBps) [2024-11-20T09:35:18.337Z] Copying: 853/1024 [MB] (50 MBps) [2024-11-20T09:35:19.268Z] Copying: 897/1024 [MB] (43 MBps) [2024-11-20T09:35:20.199Z] Copying: 946/1024 [MB] (49 MBps) [2024-11-20T09:35:20.800Z] Copying: 996/1024 [MB] (49 MBps) [2024-11-20T09:35:20.800Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 09:35:20.657444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.344 [2024-11-20 09:35:20.657509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.344 [2024-11-20 09:35:20.657525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:55.344 [2024-11-20 09:35:20.657534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.657560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.345 [2024-11-20 09:35:20.661397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.661435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.345 [2024-11-20 09:35:20.661454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms 00:22:55.345 [2024-11-20 09:35:20.661464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.661726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.661738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.345 [2024-11-20 09:35:20.661747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:22:55.345 [2024-11-20 09:35:20.661756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.666000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.666038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.345 [2024-11-20 09:35:20.666050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.228 ms 00:22:55.345 [2024-11-20 09:35:20.666059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.672941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:55.345 [2024-11-20 09:35:20.673096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.345 [2024-11-20 09:35:20.673112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:22:55.345 [2024-11-20 09:35:20.673120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.696740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.696772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.345 [2024-11-20 09:35:20.696782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.558 ms 00:22:55.345 [2024-11-20 09:35:20.696789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.710492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.710536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.345 [2024-11-20 09:35:20.710547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.684 ms 00:22:55.345 [2024-11-20 09:35:20.710554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.712282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.712337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.345 [2024-11-20 09:35:20.712347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:22:55.345 [2024-11-20 09:35:20.712354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.734978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.735009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:55.345 [2024-11-20 09:35:20.735019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.609 ms 00:22:55.345 [2024-11-20 09:35:20.735027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.757681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.757719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:55.345 [2024-11-20 09:35:20.757730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.636 ms 00:22:55.345 [2024-11-20 09:35:20.757738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.345 [2024-11-20 09:35:20.779477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.345 [2024-11-20 09:35:20.779621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.345 [2024-11-20 09:35:20.779638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.720 ms 00:22:55.345 [2024-11-20 09:35:20.779645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.613 [2024-11-20 09:35:20.801980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.613 [2024-11-20 09:35:20.802013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.613 [2024-11-20 09:35:20.802024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.290 ms 00:22:55.613 [2024-11-20 09:35:20.802031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.613 [2024-11-20 
09:35:20.802049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.613 [2024-11-20 09:35:20.802062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:55.613 [2024-11-20 09:35:20.802077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:55.613 [2024-11-20 09:35:20.802086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 
09:35:20.802254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:22:55.613 [2024-11-20 09:35:20.802477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:22:55.613 [2024-11-20 09:35:20.802684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.614 [2024-11-20 09:35:20.802884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:55.614 [2024-11-20 09:35:20.802896] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5a80b22-005b-497e-b6ca-72bcebdf972a 00:22:55.614 [2024-11-20 09:35:20.802903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:55.614 [2024-11-20 09:35:20.802910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:55.614 [2024-11-20 09:35:20.802917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:55.614 [2024-11-20 09:35:20.802925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:55.614 [2024-11-20 09:35:20.802932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:55.614 [2024-11-20 09:35:20.802939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:55.614 [2024-11-20 09:35:20.802953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:55.614 [2024-11-20 09:35:20.802959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:55.614 [2024-11-20 09:35:20.802966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:55.614 [2024-11-20 09:35:20.802973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.614 [2024-11-20 09:35:20.802981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:55.614 [2024-11-20 09:35:20.802989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:22:55.614 [2024-11-20 09:35:20.802995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.815360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.614 [2024-11-20 09:35:20.815398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:55.614 [2024-11-20 09:35:20.815410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.346 ms 00:22:55.614 [2024-11-20 09:35:20.815417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.815762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.614 [2024-11-20 09:35:20.815770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:55.614 [2024-11-20 09:35:20.815783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:55.614 [2024-11-20 09:35:20.815790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.847877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.847919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.614 [2024-11-20 09:35:20.847930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.847937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.847994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.848001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.614 [2024-11-20 09:35:20.848013] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.848020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.848075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.848085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.614 [2024-11-20 09:35:20.848092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.848099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.848113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.848120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.614 [2024-11-20 09:35:20.848127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.848137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.923616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.923667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.614 [2024-11-20 09:35:20.923679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.923686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.985981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.614 [2024-11-20 09:35:20.986178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.614 [2024-11-20 09:35:20.986279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.614 [2024-11-20 09:35:20.986359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.614 [2024-11-20 09:35:20.986480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:22:55.614 [2024-11-20 09:35:20.986541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.614 [2024-11-20 09:35:20.986597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.614 [2024-11-20 09:35:20.986650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.614 [2024-11-20 09:35:20.986657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.614 [2024-11-20 09:35:20.986664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.614 [2024-11-20 09:35:20.986769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.306 ms, result 0 00:22:56.545 00:22:56.545 00:22:56.545 09:35:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:58.438 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:22:58.438 09:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:22:58.438 09:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:22:58.438 09:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.438 09:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:58.695 09:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:58.695 Process with pid 75853 is not found 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 75853 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 75853 ']' 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 75853 00:22:58.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75853) - No such process 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 75853 is not found' 00:22:58.695 09:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:22:58.954 Remove shared memory files 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.954 ************************************ 00:22:58.954 END TEST ftl_dirty_shutdown 00:22:58.954 ************************************ 00:22:58.954 00:22:58.954 real 2m17.944s 00:22:58.954 user 2m35.529s 00:22:58.954 sys 0m23.078s 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.954 09:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.954 09:35:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:58.954 09:35:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:58.954 09:35:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.954 09:35:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:58.954 ************************************ 00:22:58.954 START TEST ftl_upgrade_shutdown 00:22:58.954 ************************************ 00:22:58.954 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:59.212 * Looking for test storage... 00:22:59.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.212 --rc genhtml_branch_coverage=1 00:22:59.212 --rc genhtml_function_coverage=1 00:22:59.212 --rc genhtml_legend=1 00:22:59.212 --rc geninfo_all_blocks=1 00:22:59.212 --rc geninfo_unexecuted_blocks=1 00:22:59.212 00:22:59.212 ' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.212 --rc genhtml_branch_coverage=1 00:22:59.212 --rc genhtml_function_coverage=1 00:22:59.212 --rc genhtml_legend=1 00:22:59.212 --rc geninfo_all_blocks=1 00:22:59.212 --rc geninfo_unexecuted_blocks=1 00:22:59.212 00:22:59.212 ' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.212 --rc genhtml_branch_coverage=1 00:22:59.212 --rc genhtml_function_coverage=1 00:22:59.212 --rc genhtml_legend=1 00:22:59.212 --rc geninfo_all_blocks=1 00:22:59.212 --rc geninfo_unexecuted_blocks=1 00:22:59.212 00:22:59.212 ' 00:22:59.212 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.212 --rc genhtml_branch_coverage=1 00:22:59.212 --rc genhtml_function_coverage=1 00:22:59.212 --rc genhtml_legend=1 00:22:59.212 --rc geninfo_all_blocks=1 00:22:59.212 --rc geninfo_unexecuted_blocks=1 00:22:59.212 00:22:59.212 ' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:59.213 09:35:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77422 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77422 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77422 ']' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.213 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.213 [2024-11-20 09:35:24.582290] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
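The records just above show the harness's launch pattern for the FTL target: spawn spdk_tgt pinned to core 0, then block in waitforlisten until the RPC socket answers. A minimal standalone sketch of that pattern, assuming a built SPDK tree and the default /var/tmp/spdk.sock socket (the retry loop is illustrative, not the harness's exact waitforlisten code):

    # Start the target on core 0, then poll until its RPC socket responds.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app listens on /var/tmp/spdk.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done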
00:22:59.213 [2024-11-20 09:35:24.582583] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77422 ] 00:22:59.471 [2024-11-20 09:35:24.741857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.471 [2024-11-20 09:35:24.844614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:00.036 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:23:00.293 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:00.293 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:00.551 { 00:23:00.551 "name": "basen1", 00:23:00.551 "aliases": [ 00:23:00.551 "63da25a0-ba25-4d1a-be68-a9171cc52ddc" 00:23:00.551 ], 00:23:00.551 "product_name": "NVMe disk", 00:23:00.551 "block_size": 4096, 00:23:00.551 "num_blocks": 1310720, 00:23:00.551 "uuid": "63da25a0-ba25-4d1a-be68-a9171cc52ddc", 00:23:00.551 "numa_id": -1, 00:23:00.551 "assigned_rate_limits": { 00:23:00.551 "rw_ios_per_sec": 0, 00:23:00.551 "rw_mbytes_per_sec": 0, 00:23:00.551 "r_mbytes_per_sec": 0, 00:23:00.551 "w_mbytes_per_sec": 0 00:23:00.551 }, 00:23:00.551 "claimed": true, 00:23:00.551 "claim_type": "read_many_write_one", 00:23:00.551 "zoned": false, 00:23:00.551 "supported_io_types": { 00:23:00.551 "read": true, 00:23:00.551 "write": true, 00:23:00.551 "unmap": true, 00:23:00.551 "flush": true, 00:23:00.551 "reset": true, 00:23:00.551 "nvme_admin": true, 00:23:00.551 "nvme_io": true, 00:23:00.551 "nvme_io_md": false, 00:23:00.551 "write_zeroes": true, 00:23:00.551 "zcopy": false, 00:23:00.551 "get_zone_info": false, 00:23:00.551 "zone_management": false, 00:23:00.551 "zone_append": false, 00:23:00.551 "compare": true, 00:23:00.551 "compare_and_write": false, 00:23:00.551 "abort": true, 00:23:00.551 "seek_hole": false, 00:23:00.551 "seek_data": false, 00:23:00.551 "copy": true, 00:23:00.551 "nvme_iov_md": false 00:23:00.551 }, 00:23:00.551 "driver_specific": { 00:23:00.551 "nvme": [ 00:23:00.551 { 00:23:00.551 "pci_address": "0000:00:11.0", 00:23:00.551 "trid": { 00:23:00.551 "trtype": "PCIe", 00:23:00.551 "traddr": "0000:00:11.0" 00:23:00.551 }, 00:23:00.551 "ctrlr_data": { 00:23:00.551 "cntlid": 0, 00:23:00.551 "vendor_id": "0x1b36", 00:23:00.551 "model_number": "QEMU NVMe Ctrl", 00:23:00.551 "serial_number": "12341", 00:23:00.551 "firmware_revision": "8.0.0", 00:23:00.551 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:00.551 "oacs": { 00:23:00.551 "security": 0, 00:23:00.551 "format": 1, 00:23:00.551 "firmware": 0, 00:23:00.551 "ns_manage": 1 00:23:00.551 }, 00:23:00.551 "multi_ctrlr": false, 00:23:00.551 "ana_reporting": false 00:23:00.551 }, 00:23:00.551 "vs": { 00:23:00.551 "nvme_version": "1.4" 00:23:00.551 }, 00:23:00.551 "ns_data": { 00:23:00.551 "id": 1, 00:23:00.551 "can_share": false 00:23:00.551 } 00:23:00.551 } 00:23:00.551 ], 00:23:00.551 "mp_policy": "active_passive" 00:23:00.551 } 00:23:00.551 } 00:23:00.551 ]' 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:00.551 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=cc063a3c-82bb-4bcc-9717-7b68e724d6e1 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:00.808 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc063a3c-82bb-4bcc-9717-7b68e724d6e1 00:23:01.065 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:23:01.322 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2122fb04-8064-4cf5-b5c1-d1513cae5689 00:23:01.322 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2122fb04-8064-4cf5-b5c1-d1513cae5689 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a86ef136-f111-4475-983f-716b9620cc32 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a86ef136-f111-4475-983f-716b9620cc32 ]] 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a86ef136-f111-4475-983f-716b9620cc32 5120 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a86ef136-f111-4475-983f-716b9620cc32 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a86ef136-f111-4475-983f-716b9620cc32 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a86ef136-f111-4475-983f-716b9620cc32 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:01.580 09:35:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a86ef136-f111-4475-983f-716b9620cc32 00:23:01.837 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.837 { 00:23:01.837 "name": "a86ef136-f111-4475-983f-716b9620cc32", 00:23:01.837 "aliases": [ 00:23:01.837 "lvs/basen1p0" 00:23:01.837 ], 00:23:01.837 "product_name": "Logical Volume", 00:23:01.837 "block_size": 4096, 00:23:01.837 "num_blocks": 5242880, 00:23:01.837 "uuid": "a86ef136-f111-4475-983f-716b9620cc32", 00:23:01.837 "assigned_rate_limits": { 00:23:01.837 "rw_ios_per_sec": 0, 00:23:01.837 "rw_mbytes_per_sec": 0, 00:23:01.837 "r_mbytes_per_sec": 0, 00:23:01.837 "w_mbytes_per_sec": 0 00:23:01.837 }, 00:23:01.837 "claimed": false, 00:23:01.837 "zoned": false, 00:23:01.837 "supported_io_types": { 00:23:01.837 "read": true, 00:23:01.837 "write": true, 00:23:01.837 "unmap": true, 00:23:01.837 "flush": false, 00:23:01.837 "reset": true, 00:23:01.837 "nvme_admin": false, 00:23:01.837 "nvme_io": false, 00:23:01.837 "nvme_io_md": false, 00:23:01.837 "write_zeroes": 
true, 00:23:01.837 "zcopy": false, 00:23:01.837 "get_zone_info": false, 00:23:01.837 "zone_management": false, 00:23:01.837 "zone_append": false, 00:23:01.837 "compare": false, 00:23:01.837 "compare_and_write": false, 00:23:01.837 "abort": false, 00:23:01.837 "seek_hole": true, 00:23:01.837 "seek_data": true, 00:23:01.837 "copy": false, 00:23:01.837 "nvme_iov_md": false 00:23:01.837 }, 00:23:01.837 "driver_specific": { 00:23:01.837 "lvol": { 00:23:01.837 "lvol_store_uuid": "2122fb04-8064-4cf5-b5c1-d1513cae5689", 00:23:01.837 "base_bdev": "basen1", 00:23:01.837 "thin_provision": true, 00:23:01.837 "num_allocated_clusters": 0, 00:23:01.837 "snapshot": false, 00:23:01.837 "clone": false, 00:23:01.837 "esnap_clone": false 00:23:01.837 } 00:23:01.837 } 00:23:01.837 } 00:23:01.838 ]' 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:01.838 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:23:02.095 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:23:02.095 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:23:02.095 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:23:02.353 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:23:02.353 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:23:02.353 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a86ef136-f111-4475-983f-716b9620cc32 -c cachen1p0 --l2p_dram_limit 2 00:23:02.353 [2024-11-20 09:35:27.744312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.744365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:02.353 [2024-11-20 09:35:27.744381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:02.353 [2024-11-20 09:35:27.744390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.353 [2024-11-20 09:35:27.744446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.744455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:02.353 [2024-11-20 09:35:27.744466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:23:02.353 [2024-11-20 09:35:27.744474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.353 [2024-11-20 09:35:27.744494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:02.353 [2024-11-20 
09:35:27.745188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:02.353 [2024-11-20 09:35:27.745207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.745215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:02.353 [2024-11-20 09:35:27.745225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:23:02.353 [2024-11-20 09:35:27.745232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.353 [2024-11-20 09:35:27.745316] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 82095c67-4430-4aad-a76e-942d979bfff9 00:23:02.353 [2024-11-20 09:35:27.746442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.746593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:23:02.353 [2024-11-20 09:35:27.746614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:02.353 [2024-11-20 09:35:27.746625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.353 [2024-11-20 09:35:27.751667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.751700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:02.353 [2024-11-20 09:35:27.751712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.995 ms 00:23:02.353 [2024-11-20 09:35:27.751722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.353 [2024-11-20 09:35:27.751759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.353 [2024-11-20 09:35:27.751769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:02.354 [2024-11-20 09:35:27.751777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:23:02.354 [2024-11-20 09:35:27.751787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.751823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.354 [2024-11-20 09:35:27.751834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:02.354 [2024-11-20 09:35:27.751842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:02.354 [2024-11-20 09:35:27.751855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.751876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:02.354 [2024-11-20 09:35:27.755387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.354 [2024-11-20 09:35:27.755418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:02.354 [2024-11-20 09:35:27.755431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.513 ms 00:23:02.354 [2024-11-20 09:35:27.755438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.755464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.354 [2024-11-20 09:35:27.755473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:02.354 [2024-11-20 09:35:27.755482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:02.354 [2024-11-20 09:35:27.755490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.755523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:23:02.354 [2024-11-20 09:35:27.755659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:02.354 [2024-11-20 09:35:27.755674] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:02.354 [2024-11-20 09:35:27.755685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:02.354 [2024-11-20 09:35:27.755696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:02.354 [2024-11-20 09:35:27.755704] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:02.354 [2024-11-20 09:35:27.755714] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:02.354 [2024-11-20 09:35:27.755721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:02.354 [2024-11-20 09:35:27.755732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:02.354 [2024-11-20 09:35:27.755739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:02.354 [2024-11-20 09:35:27.755748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.354 [2024-11-20 09:35:27.755755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:02.354 [2024-11-20 09:35:27.755764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:23:02.354 [2024-11-20 09:35:27.755771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.755855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.354 [2024-11-20 09:35:27.755864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:02.354 [2024-11-20 09:35:27.755874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:23:02.354 [2024-11-20 09:35:27.755886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.354 [2024-11-20 09:35:27.756000] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:02.354 [2024-11-20 09:35:27.756010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:02.354 [2024-11-20 09:35:27.756019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:02.354 [2024-11-20 09:35:27.756042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:02.354 [2024-11-20 09:35:27.756057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:02.354 [2024-11-20 09:35:27.756065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:02.354 [2024-11-20 09:35:27.756072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:02.354 [2024-11-20 09:35:27.756087] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:23:02.354 [2024-11-20 09:35:27.756095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:02.354 [2024-11-20 09:35:27.756110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:02.354 [2024-11-20 09:35:27.756116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:02.354 [2024-11-20 09:35:27.756133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:02.354 [2024-11-20 09:35:27.756143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:02.354 [2024-11-20 09:35:27.756158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:02.354 [2024-11-20 09:35:27.756178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:02.354 [2024-11-20 09:35:27.756200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:02.354 [2024-11-20 09:35:27.756221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:02.354 [2024-11-20 09:35:27.756244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:02.354 [2024-11-20 09:35:27.756265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:02.354 [2024-11-20 09:35:27.756287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:02.354 [2024-11-20 09:35:27.756333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:02.354 [2024-11-20 09:35:27.756342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756348] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:23:02.354 [2024-11-20 09:35:27.756361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:02.354 [2024-11-20 09:35:27.756372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:02.354 [2024-11-20 09:35:27.756399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:02.354 [2024-11-20 09:35:27.756416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:02.354 [2024-11-20 09:35:27.756428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:02.354 [2024-11-20 09:35:27.756440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:02.354 [2024-11-20 09:35:27.756449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:02.354 [2024-11-20 09:35:27.756462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:02.354 [2024-11-20 09:35:27.756477] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:02.354 [2024-11-20 09:35:27.756495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:02.354 [2024-11-20 09:35:27.756526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:02.354 [2024-11-20 09:35:27.756561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:02.354 [2024-11-20 09:35:27.756576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:02.354 [2024-11-20 09:35:27.756584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:02.354 [2024-11-20 09:35:27.756597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:02.354 [2024-11-20 09:35:27.756663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:02.355 [2024-11-20 09:35:27.756680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:02.355 [2024-11-20 09:35:27.756692] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:02.355 [2024-11-20 09:35:27.756707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.355 [2024-11-20 09:35:27.756720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.355 [2024-11-20 09:35:27.756734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:02.355 [2024-11-20 09:35:27.756746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:02.355 [2024-11-20 09:35:27.756760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:02.355 [2024-11-20 09:35:27.756774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.355 [2024-11-20 09:35:27.756795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:02.355 [2024-11-20 09:35:27.756808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:23:02.355 [2024-11-20 09:35:27.756822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.355 [2024-11-20 09:35:27.756882] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
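Everything from "Check configuration" down to this scrub notice is one FTL startup sequence, driven by the single RPC issued back at common.sh@119. Stripped of the trace records, the call boils down to the following (bdev names taken verbatim from this run):

    # Create the FTL bdev: the thin-provisioned lvol is the base device and a
    # split of the second NVMe drive is the non-volatile write cache, with the
    # L2P table capped at 2 MiB of resident DRAM.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl \
        -d a86ef136-f111-4475-983f-716b9620cc32 \
        -c cachen1p0 \
        --l2p_dram_limit 2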
00:23:02.355 [2024-11-20 09:35:27.756911] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:04.877 [2024-11-20 09:35:29.911498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.911550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:04.877 [2024-11-20 09:35:29.911566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2154.607 ms 00:23:04.877 [2024-11-20 09:35:29.911576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.936623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.936672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:04.877 [2024-11-20 09:35:29.936685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.852 ms 00:23:04.877 [2024-11-20 09:35:29.936695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.936778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.936790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:04.877 [2024-11-20 09:35:29.936798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:04.877 [2024-11-20 09:35:29.936810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.966925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.966965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:04.877 [2024-11-20 09:35:29.966976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.064 ms 00:23:04.877 [2024-11-20 09:35:29.966985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.967019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.967032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:04.877 [2024-11-20 09:35:29.967040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:04.877 [2024-11-20 09:35:29.967048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.967412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.967431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:04.877 [2024-11-20 09:35:29.967440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:23:04.877 [2024-11-20 09:35:29.967449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.967494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.967504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:04.877 [2024-11-20 09:35:29.967513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:04.877 [2024-11-20 09:35:29.967524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.981245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.981282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:04.877 [2024-11-20 09:35:29.981291] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.703 ms 00:23:04.877 [2024-11-20 09:35:29.981318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:29.992481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:04.877 [2024-11-20 09:35:29.993273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:29.993322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:04.877 [2024-11-20 09:35:29.993335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.880 ms 00:23:04.877 [2024-11-20 09:35:29.993343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.025084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.025135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:23:04.877 [2024-11-20 09:35:30.025152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.711 ms 00:23:04.877 [2024-11-20 09:35:30.025160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.025251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.025265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:04.877 [2024-11-20 09:35:30.025278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:23:04.877 [2024-11-20 09:35:30.025285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.047893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.047931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:23:04.877 [2024-11-20 09:35:30.047944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.521 ms 00:23:04.877 [2024-11-20 09:35:30.047952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.070200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.070246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:23:04.877 [2024-11-20 09:35:30.070260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.202 ms 00:23:04.877 [2024-11-20 09:35:30.070267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.070867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.070889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:04.877 [2024-11-20 09:35:30.070900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:23:04.877 [2024-11-20 09:35:30.070907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.137807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.137855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:23:04.877 [2024-11-20 09:35:30.137874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.852 ms 00:23:04.877 [2024-11-20 09:35:30.137882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.162150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:23:04.877 [2024-11-20 09:35:30.162201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:23:04.877 [2024-11-20 09:35:30.162223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.194 ms 00:23:04.877 [2024-11-20 09:35:30.162230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.185286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.185327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:23:04.877 [2024-11-20 09:35:30.185340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.013 ms 00:23:04.877 [2024-11-20 09:35:30.185347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.877 [2024-11-20 09:35:30.208142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.877 [2024-11-20 09:35:30.208286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:04.877 [2024-11-20 09:35:30.208320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.758 ms 00:23:04.878 [2024-11-20 09:35:30.208328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.878 [2024-11-20 09:35:30.208368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.878 [2024-11-20 09:35:30.208378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:04.878 [2024-11-20 09:35:30.208391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:04.878 [2024-11-20 09:35:30.208398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.878 [2024-11-20 09:35:30.208478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.878 [2024-11-20 09:35:30.208488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:04.878 [2024-11-20 09:35:30.208499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:23:04.878 [2024-11-20 09:35:30.208506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.878 [2024-11-20 09:35:30.209440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2464.733 ms, result 0 00:23:04.878 { 00:23:04.878 "name": "ftl", 00:23:04.878 "uuid": "82095c67-4430-4aad-a76e-942d979bfff9" 00:23:04.878 } 00:23:04.878 09:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:23:05.135 [2024-11-20 09:35:30.364895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.135 09:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:23:05.135 09:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:23:05.392 [2024-11-20 09:35:30.729099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:05.392 09:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:23:05.649 [2024-11-20 09:35:30.933536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:05.649 09:35:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:05.906 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:23:05.907 Fill FTL, iteration 1 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77533 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77533 /var/tmp/spdk.tgt.sock 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77533 ']' 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:23:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.907 09:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.907 [2024-11-20 09:35:31.351566] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
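Two SPDK processes now coexist on the host: the target from earlier on core 0 behind /var/tmp/spdk.sock, and this short-lived initiator on core 1 behind /var/tmp/spdk.tgt.sock. The fill parameters also pin down the geometry: bs=1048576 and count=1024 mean each iteration moves 1024 x 1 MiB = 1073741824 bytes, exactly the 1 GiB announced as size. As the records below show, the initiator's only job is to attach to the target over NVMe/TCP loopback so the FTL device appears locally as ftln1:

    # Issued against the initiator's RPC socket, not the target's.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0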
00:23:05.907 [2024-11-20 09:35:31.351806] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77533 ] 00:23:06.164 [2024-11-20 09:35:31.496868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.164 [2024-11-20 09:35:31.596111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.728 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.728 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:06.728 09:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:23:06.986 ftln1 00:23:06.986 09:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:23:06.986 09:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77533 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77533 ']' 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 77533 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77533 00:23:07.243 killing process with pid 77533 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77533' 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 77533 00:23:07.243 09:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 77533 00:23:09.140 09:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:23:09.140 09:35:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:09.140 [2024-11-20 09:35:34.137583] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
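The echo '{"subsystems": [' ... ']}' pair wrapped around save_subsystem_config above is what makes the initiator disposable: its bdev configuration is captured as JSON, the process is killed, and spdk_dd later replays that file to recreate the TCP attachment with no RPC server involved. A sketch of the equivalent pipeline (same commands shown together; where common.sh actually redirects the output is an assumption here, though the --json path used below implies it):

    # Capture only the bdev subsystem config, wrapped in the top-level object
    # that spdk_dd's --json option expects.
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
            save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json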
00:23:09.140 [2024-11-20 09:35:34.137675] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:23:09.140 [2024-11-20 09:35:34.293286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.140 [2024-11-20 09:35:34.390136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.519  [2024-11-20T09:35:36.907Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-20T09:35:37.840Z] Copying: 495/1024 [MB] (255 MBps) [2024-11-20T09:35:38.772Z] Copying: 771/1024 [MB] (276 MBps) [2024-11-20T09:35:39.337Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:23:13.881 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:23:13.881 Calculate MD5 checksum, iteration 1 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:13.881 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:14.138 [2024-11-20 09:35:39.342414] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
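Note the direction flip relative to the fill: --if/--ob wrote a host stream into the bdev, while this pass uses --ib/--of to pull the same 1 GiB window back out of ftln1 into a scratch file for fingerprinting. Condensed, the verify step the following records execute is:

    # Read MiB 0..1023 back from the FTL bdev, then keep only the digest field.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '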
00:23:14.139 [2024-11-20 09:35:39.342703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77636 ] 00:23:14.139 [2024-11-20 09:35:39.497018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.139 [2024-11-20 09:35:39.581616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.510  [2024-11-20T09:35:41.531Z] Copying: 722/1024 [MB] (722 MBps) [2024-11-20T09:35:42.095Z] Copying: 1024/1024 [MB] (average 709 MBps) 00:23:16.639 00:23:16.639 09:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:23:16.639 09:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:19.161 Fill FTL, iteration 2 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e2fd1e5d7b1369304721f593554b8b9b 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:19.161 09:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:19.161 [2024-11-20 09:35:44.029141] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
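The seek/skip bookkeeping visible here is the whole iteration scheme: after each pass both offsets advance by count, so iteration 1 covers MiB 0..1023 and iteration 2 covers MiB 1024..2047, with one checksum stored in sums[] per window. Paraphrased from the variables upgrade_shutdown.sh sets (a sketch, not the script verbatim):

    bs=1048576 count=1024 qd=2
    seek=0 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done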
00:23:19.162 [2024-11-20 09:35:44.029259] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77686 ] 00:23:19.162 [2024-11-20 09:35:44.186308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.162 [2024-11-20 09:35:44.283342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.534  [2024-11-20T09:35:46.924Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-20T09:35:47.857Z] Copying: 394/1024 [MB] (182 MBps) [2024-11-20T09:35:48.797Z] Copying: 579/1024 [MB] (185 MBps) [2024-11-20T09:35:49.733Z] Copying: 768/1024 [MB] (189 MBps) [2024-11-20T09:35:50.297Z] Copying: 941/1024 [MB] (173 MBps) [2024-11-20T09:35:50.863Z] Copying: 1024/1024 [MB] (average 189 MBps) 00:23:25.407 00:23:25.407 Calculate MD5 checksum, iteration 2 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:25.407 09:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:25.407 [2024-11-20 09:35:50.685544] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
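One more observation before the second digest lands below: this fill averaged 189 MBps against 258 MBps for iteration 1, a slowdown consistent with the FTL device no longer being empty on the second pass (an inference from the copy rates, not something the log asserts). Once both windows are checksummed, the run flips FTL properties and counts how many NV cache chunks hold data; the jq filter from upgrade_shutdown.sh@63, shown piped for readability, is:

    # Counts cache chunks with non-zero utilization; in this run it yields 3
    # (two CLOSED chunks plus the partially filled OPEN chunk).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'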
00:23:25.407 [2024-11-20 09:35:50.685858] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77761 ] 00:23:25.407 [2024-11-20 09:35:50.848958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.665 [2024-11-20 09:35:50.955798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.561  [2024-11-20T09:35:53.583Z] Copying: 428/1024 [MB] (428 MBps) [2024-11-20T09:35:53.840Z] Copying: 827/1024 [MB] (399 MBps) [2024-11-20T09:35:55.210Z] Copying: 1024/1024 [MB] (average 441 MBps) 00:23:29.754 00:23:29.754 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:23:29.754 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:31.651 09:35:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:31.651 09:35:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b6c2c64cde5af66220394f89fd8e5b94 00:23:31.651 09:35:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:31.651 09:35:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:31.651 09:35:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:31.909 [2024-11-20 09:35:57.171019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.909 [2024-11-20 09:35:57.171073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:31.909 [2024-11-20 09:35:57.171087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:31.909 [2024-11-20 09:35:57.171095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.909 [2024-11-20 09:35:57.171121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.909 [2024-11-20 09:35:57.171129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:31.909 [2024-11-20 09:35:57.171138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:31.909 [2024-11-20 09:35:57.171149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.909 [2024-11-20 09:35:57.171168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:31.909 [2024-11-20 09:35:57.171176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:31.909 [2024-11-20 09:35:57.171184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:31.909 [2024-11-20 09:35:57.171192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:31.909 [2024-11-20 09:35:57.171251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:23:31.909 true 00:23:31.909 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:32.167 { 00:23:32.167 "name": "ftl", 00:23:32.167 "properties": [ 00:23:32.167 { 00:23:32.167 "name": "superblock_version", 00:23:32.167 "value": 5, 00:23:32.167 "read-only": true 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "name": "base_device", 00:23:32.167 "bands": [ 00:23:32.167 { 00:23:32.167 "id": 
0, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 1, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 2, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 3, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 4, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 5, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 6, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 7, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 8, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 9, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 10, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 11, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 12, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 13, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 14, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 15, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 16, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 17, 00:23:32.167 "state": "FREE", 00:23:32.167 "validity": 0.0 00:23:32.167 } 00:23:32.167 ], 00:23:32.167 "read-only": true 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "name": "cache_device", 00:23:32.167 "type": "bdev", 00:23:32.167 "chunks": [ 00:23:32.167 { 00:23:32.167 "id": 0, 00:23:32.167 "state": "INACTIVE", 00:23:32.167 "utilization": 0.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 1, 00:23:32.167 "state": "CLOSED", 00:23:32.167 "utilization": 1.0 00:23:32.167 }, 00:23:32.167 { 00:23:32.167 "id": 2, 00:23:32.168 "state": "CLOSED", 00:23:32.168 "utilization": 1.0 00:23:32.168 }, 00:23:32.168 { 00:23:32.168 "id": 3, 00:23:32.168 "state": "OPEN", 00:23:32.168 "utilization": 0.001953125 00:23:32.168 }, 00:23:32.168 { 00:23:32.168 "id": 4, 00:23:32.168 "state": "OPEN", 00:23:32.168 "utilization": 0.0 00:23:32.168 } 00:23:32.168 ], 00:23:32.168 "read-only": true 00:23:32.168 }, 00:23:32.168 { 00:23:32.168 "name": "verbose_mode", 00:23:32.168 "value": true, 00:23:32.168 "unit": "", 00:23:32.168 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:32.168 }, 00:23:32.168 { 00:23:32.168 "name": "prep_upgrade_on_shutdown", 00:23:32.168 "value": false, 00:23:32.168 "unit": "", 00:23:32.168 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:32.168 } 00:23:32.168 ] 00:23:32.168 } 00:23:32.168 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:23:32.168 [2024-11-20 09:35:57.583512] 
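The property dump above shows the NV cache with two CLOSED chunks at utilization 1.0 and one OPEN chunk at 0.001953125; the step that follows counts exactly those non-empty chunks with jq (the filter at upgrade_shutdown.sh@63, visible just below) and compares the count against zero before exercising the shutdown-time upgrade. A sketch of the same query against a saved dump, assuming the RPC output is captured to props.json (a hypothetical scratch file):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
  # Count cache_device chunks whose utilization is non-zero; for the state
  # dumped above this yields 3 (chunks 1, 2 and 3).
  jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json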
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.168 [2024-11-20 09:35:57.583703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:32.168 [2024-11-20 09:35:57.583767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:32.168 [2024-11-20 09:35:57.583791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.168 [2024-11-20 09:35:57.583832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.168 [2024-11-20 09:35:57.583854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:32.168 [2024-11-20 09:35:57.583873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:32.168 [2024-11-20 09:35:57.583891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.168 [2024-11-20 09:35:57.583921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.168 [2024-11-20 09:35:57.583941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:32.168 [2024-11-20 09:35:57.583961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:32.168 [2024-11-20 09:35:57.584013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.168 [2024-11-20 09:35:57.584088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.563 ms, result 0 00:23:32.168 true 00:23:32.168 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:23:32.168 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:32.168 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:32.426 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:23:32.426 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:23:32.426 09:35:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:32.683 [2024-11-20 09:35:58.005926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.683 [2024-11-20 09:35:58.005979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:32.683 [2024-11-20 09:35:58.005992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:32.683 [2024-11-20 09:35:58.006000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.683 [2024-11-20 09:35:58.006023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.683 [2024-11-20 09:35:58.006032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:32.683 [2024-11-20 09:35:58.006040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:32.683 [2024-11-20 09:35:58.006047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.683 [2024-11-20 09:35:58.006066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:32.683 [2024-11-20 09:35:58.006073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:32.683 [2024-11-20 09:35:58.006080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:32.683 [2024-11-20 
09:35:58.006088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:32.683 [2024-11-20 09:35:58.006142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.208 ms, result 0 00:23:32.683 true 00:23:32.683 09:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:32.941 { 00:23:32.942 "name": "ftl", 00:23:32.942 "properties": [ 00:23:32.942 { 00:23:32.942 "name": "superblock_version", 00:23:32.942 "value": 5, 00:23:32.942 "read-only": true 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "name": "base_device", 00:23:32.942 "bands": [ 00:23:32.942 { 00:23:32.942 "id": 0, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 1, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 2, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 3, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 4, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 5, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 6, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 7, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 8, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 9, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 10, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 11, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 12, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 13, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 14, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 15, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 16, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 17, 00:23:32.942 "state": "FREE", 00:23:32.942 "validity": 0.0 00:23:32.942 } 00:23:32.942 ], 00:23:32.942 "read-only": true 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "name": "cache_device", 00:23:32.942 "type": "bdev", 00:23:32.942 "chunks": [ 00:23:32.942 { 00:23:32.942 "id": 0, 00:23:32.942 "state": "INACTIVE", 00:23:32.942 "utilization": 0.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 1, 00:23:32.942 "state": "CLOSED", 00:23:32.942 "utilization": 1.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 2, 00:23:32.942 "state": "CLOSED", 00:23:32.942 "utilization": 1.0 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 3, 00:23:32.942 "state": "OPEN", 00:23:32.942 "utilization": 0.001953125 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "id": 4, 00:23:32.942 "state": "OPEN", 00:23:32.942 "utilization": 0.0 00:23:32.942 } 00:23:32.942 ], 00:23:32.942 "read-only": true 00:23:32.942 
}, 00:23:32.942 { 00:23:32.942 "name": "verbose_mode", 00:23:32.942 "value": true, 00:23:32.942 "unit": "", 00:23:32.942 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:32.942 }, 00:23:32.942 { 00:23:32.942 "name": "prep_upgrade_on_shutdown", 00:23:32.942 "value": true, 00:23:32.942 "unit": "", 00:23:32.942 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:32.942 } 00:23:32.942 ] 00:23:32.942 } 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77422 ]] 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77422 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77422 ']' 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 77422 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77422 00:23:32.942 killing process with pid 77422 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77422' 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 77422 00:23:32.942 09:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 77422 00:23:33.874 [2024-11-20 09:35:58.966204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:33.874 [2024-11-20 09:35:58.980675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.874 [2024-11-20 09:35:58.980722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:33.874 [2024-11-20 09:35:58.980734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:33.874 [2024-11-20 09:35:58.980743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.874 [2024-11-20 09:35:58.980764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:33.874 [2024-11-20 09:35:58.983390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.874 [2024-11-20 09:35:58.983417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:33.874 [2024-11-20 09:35:58.983429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.612 ms 00:23:33.874 [2024-11-20 09:35:58.983437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.760136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.760314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:43.878 [2024-11-20 09:36:07.760335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8776.650 ms 00:23:43.878 [2024-11-20 09:36:07.760349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 
09:36:07.762268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.762296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:43.878 [2024-11-20 09:36:07.762317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.900 ms 00:23:43.878 [2024-11-20 09:36:07.762325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.763451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.763471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:43.878 [2024-11-20 09:36:07.763480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:23:43.878 [2024-11-20 09:36:07.763489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.773715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.773745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:43.878 [2024-11-20 09:36:07.773755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.186 ms 00:23:43.878 [2024-11-20 09:36:07.773762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.780795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.780828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:43.878 [2024-11-20 09:36:07.780838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.001 ms 00:23:43.878 [2024-11-20 09:36:07.780846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.780926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.780936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:43.878 [2024-11-20 09:36:07.780950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:23:43.878 [2024-11-20 09:36:07.780957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.790997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.791116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:43.878 [2024-11-20 09:36:07.791131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.024 ms 00:23:43.878 [2024-11-20 09:36:07.791138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.801586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.801689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:43.878 [2024-11-20 09:36:07.801704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.420 ms 00:23:43.878 [2024-11-20 09:36:07.801711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.811416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.811446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:43.878 [2024-11-20 09:36:07.811456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.677 ms 00:23:43.878 [2024-11-20 09:36:07.811464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:23:43.878 [2024-11-20 09:36:07.822328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.878 [2024-11-20 09:36:07.822368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:43.878 [2024-11-20 09:36:07.822380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.802 ms 00:23:43.878 [2024-11-20 09:36:07.822388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.878 [2024-11-20 09:36:07.822422] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:43.878 [2024-11-20 09:36:07.822436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:43.878 [2024-11-20 09:36:07.822446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:43.878 [2024-11-20 09:36:07.822463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:43.878 [2024-11-20 09:36:07.822471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:43.878 [2024-11-20 09:36:07.822600] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:43.878 [2024-11-20 09:36:07.822608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 82095c67-4430-4aad-a76e-942d979bfff9 00:23:43.878 [2024-11-20 09:36:07.822616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:43.878 [2024-11-20 09:36:07.822623] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:23:43.878 [2024-11-20 09:36:07.822630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:23:43.878 [2024-11-20 09:36:07.822638] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:23:43.878 [2024-11-20 09:36:07.822644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:43.878 [2024-11-20 09:36:07.822655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:43.878 [2024-11-20 09:36:07.822662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:43.879 [2024-11-20 09:36:07.822669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:43.879 [2024-11-20 09:36:07.822676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:43.879 [2024-11-20 09:36:07.822684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.879 [2024-11-20 09:36:07.822694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:43.879 [2024-11-20 09:36:07.822702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:23:43.879 [2024-11-20 09:36:07.822709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.835470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.879 [2024-11-20 09:36:07.835502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:43.879 [2024-11-20 09:36:07.835511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.743 ms 00:23:43.879 [2024-11-20 09:36:07.835523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.835860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:43.879 [2024-11-20 09:36:07.835869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:43.879 [2024-11-20 09:36:07.835877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:23:43.879 [2024-11-20 09:36:07.835884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.877321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:07.877358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:43.879 [2024-11-20 09:36:07.877373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:07.877382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.877410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:07.877418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:43.879 [2024-11-20 09:36:07.877426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:07.877435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.877498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:07.877509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:43.879 [2024-11-20 09:36:07.877517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:07.877524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 
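The statistics block above lets the reported write amplification be checked by hand: WAF is total media writes divided by user writes, and 786752 / 524288 = 1.5006 to four decimal places, matching the dumped value (the remainder being writes the FTL issues on its own behalf, such as metadata persistence). A one-liner to reproduce the arithmetic:

  awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # prints WAF = 1.5006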
[2024-11-20 09:36:07.877543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:07.877550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:43.879 [2024-11-20 09:36:07.877558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:07.877565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:07.954665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:07.954718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:43.879 [2024-11-20 09:36:07.954732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:07.954745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.016888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:43.879 [2024-11-20 09:36:08.017096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:43.879 [2024-11-20 09:36:08.017211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:43.879 [2024-11-20 09:36:08.017282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:43.879 [2024-11-20 09:36:08.017422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:43.879 [2024-11-20 09:36:08.017478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:43.879 [2024-11-20 09:36:08.017535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017542] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:43.879 [2024-11-20 09:36:08.017594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:43.879 [2024-11-20 09:36:08.017601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:43.879 [2024-11-20 09:36:08.017608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:43.879 [2024-11-20 09:36:08.017718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9036.997 ms, result 0 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77975 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77975 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77975 ']' 00:23:43.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.879 09:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.879 [2024-11-20 09:36:09.140466] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
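waitforlisten then blocks until the freshly launched spdk_tgt (pid 77975 here) answers on /var/tmp/spdk.sock before the test proceeds. A rough sketch of such a wait loop, assuming rpc.py's rpc_get_methods call serves as the liveness probe (a hypothetical simplification of the helper in autotest_common.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the RPC socket until the target answers, bailing out if the
  # process died before it ever started listening.
  while ! "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 77975 2> /dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done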
00:23:43.879 [2024-11-20 09:36:09.140586] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77975 ] 00:23:43.879 [2024-11-20 09:36:09.297544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.146 [2024-11-20 09:36:09.396828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.709 [2024-11-20 09:36:10.082988] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:44.709 [2024-11-20 09:36:10.083053] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:44.967 [2024-11-20 09:36:10.227369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.967 [2024-11-20 09:36:10.227419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:44.968 [2024-11-20 09:36:10.227431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:44.968 [2024-11-20 09:36:10.227439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.227489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.227499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:44.968 [2024-11-20 09:36:10.227506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:23:44.968 [2024-11-20 09:36:10.227513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.227534] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:44.968 [2024-11-20 09:36:10.228178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:44.968 [2024-11-20 09:36:10.228193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.228201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:44.968 [2024-11-20 09:36:10.228209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:23:44.968 [2024-11-20 09:36:10.228216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.229327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:44.968 [2024-11-20 09:36:10.241598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.241629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:44.968 [2024-11-20 09:36:10.241644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.272 ms 00:23:44.968 [2024-11-20 09:36:10.241651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.241701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.241711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:44.968 [2024-11-20 09:36:10.241719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:44.968 [2024-11-20 09:36:10.241726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.246435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 
09:36:10.246595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:44.968 [2024-11-20 09:36:10.246611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.649 ms 00:23:44.968 [2024-11-20 09:36:10.246618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.246674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.246683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:44.968 [2024-11-20 09:36:10.246691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:23:44.968 [2024-11-20 09:36:10.246699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.246740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.246750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:44.968 [2024-11-20 09:36:10.246760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:44.968 [2024-11-20 09:36:10.246767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.246786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:44.968 [2024-11-20 09:36:10.249918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.250030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:44.968 [2024-11-20 09:36:10.250044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.136 ms 00:23:44.968 [2024-11-20 09:36:10.250055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.250083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.250092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:44.968 [2024-11-20 09:36:10.250099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:44.968 [2024-11-20 09:36:10.250106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.250125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:44.968 [2024-11-20 09:36:10.250143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:44.968 [2024-11-20 09:36:10.250178] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:44.968 [2024-11-20 09:36:10.250192] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:44.968 [2024-11-20 09:36:10.250294] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:44.968 [2024-11-20 09:36:10.250323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:44.968 [2024-11-20 09:36:10.250333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:44.968 [2024-11-20 09:36:10.250343] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:44.968 [2024-11-20 09:36:10.250351] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:23:44.968 [2024-11-20 09:36:10.250362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:44.968 [2024-11-20 09:36:10.250369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:44.968 [2024-11-20 09:36:10.250376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:44.968 [2024-11-20 09:36:10.250383] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:44.968 [2024-11-20 09:36:10.250390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.250397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:44.968 [2024-11-20 09:36:10.250405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:23:44.968 [2024-11-20 09:36:10.250412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.250497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.968 [2024-11-20 09:36:10.250504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:44.968 [2024-11-20 09:36:10.250512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:23:44.968 [2024-11-20 09:36:10.250521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.968 [2024-11-20 09:36:10.250628] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:44.968 [2024-11-20 09:36:10.250638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:44.968 [2024-11-20 09:36:10.250646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:44.968 [2024-11-20 09:36:10.250653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.968 [2024-11-20 09:36:10.250661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:44.968 [2024-11-20 09:36:10.250668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:44.968 [2024-11-20 09:36:10.250675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:44.968 [2024-11-20 09:36:10.250681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:44.968 [2024-11-20 09:36:10.250688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:44.969 [2024-11-20 09:36:10.250695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:44.969 [2024-11-20 09:36:10.250708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:44.969 [2024-11-20 09:36:10.250715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:44.969 [2024-11-20 09:36:10.250727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:44.969 [2024-11-20 09:36:10.250733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:44.969 [2024-11-20 09:36:10.250747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:44.969 [2024-11-20 09:36:10.250753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250760] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:44.969 [2024-11-20 09:36:10.250766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:44.969 [2024-11-20 09:36:10.250785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:44.969 [2024-11-20 09:36:10.250809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:44.969 [2024-11-20 09:36:10.250828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:44.969 [2024-11-20 09:36:10.250846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:44.969 [2024-11-20 09:36:10.250865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:44.969 [2024-11-20 09:36:10.250883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:44.969 [2024-11-20 09:36:10.250902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:44.969 [2024-11-20 09:36:10.250908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250914] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:44.969 [2024-11-20 09:36:10.250922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:44.969 [2024-11-20 09:36:10.250929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:44.969 [2024-11-20 09:36:10.250944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:44.969 [2024-11-20 09:36:10.250953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:44.969 [2024-11-20 09:36:10.250959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:44.969 [2024-11-20 09:36:10.250966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:44.969 [2024-11-20 09:36:10.250972] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:44.969 [2024-11-20 09:36:10.250978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:44.969 [2024-11-20 09:36:10.250986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:44.969 [2024-11-20 09:36:10.250994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:44.969 [2024-11-20 09:36:10.251010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:44.969 [2024-11-20 09:36:10.251030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:44.969 [2024-11-20 09:36:10.251037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:44.969 [2024-11-20 09:36:10.251044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:44.969 [2024-11-20 09:36:10.251050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:44.969 [2024-11-20 09:36:10.251098] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:44.969 [2024-11-20 09:36:10.251106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.969 [2024-11-20 09:36:10.251121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:44.970 [2024-11-20 09:36:10.251128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:44.970 [2024-11-20 09:36:10.251136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:44.970 [2024-11-20 09:36:10.251143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:44.970 [2024-11-20 09:36:10.251150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:44.970 [2024-11-20 09:36:10.251157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:23:44.970 [2024-11-20 09:36:10.251164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:44.970 [2024-11-20 09:36:10.251213] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:23:44.970 [2024-11-20 09:36:10.251224] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:48.246 [2024-11-20 09:36:13.040990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.041169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:48.246 [2024-11-20 09:36:13.041241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2789.768 ms 00:23:48.246 [2024-11-20 09:36:13.041266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.066109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.066251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:48.246 [2024-11-20 09:36:13.066327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.620 ms 00:23:48.246 [2024-11-20 09:36:13.066352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.066448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.066549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:48.246 [2024-11-20 09:36:13.066640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:48.246 [2024-11-20 09:36:13.066659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.096664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.096787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:48.246 [2024-11-20 09:36:13.096840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.953 ms 00:23:48.246 [2024-11-20 09:36:13.096866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.096914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.096936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:48.246 [2024-11-20 09:36:13.096956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:48.246 [2024-11-20 09:36:13.096975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.097335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.097372] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:48.246 [2024-11-20 09:36:13.097578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:23:48.246 [2024-11-20 09:36:13.097601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.246 [2024-11-20 09:36:13.097661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.246 [2024-11-20 09:36:13.097683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:48.247 [2024-11-20 09:36:13.097824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:48.247 [2024-11-20 09:36:13.097847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.111736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.111836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:48.247 [2024-11-20 09:36:13.111882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.855 ms 00:23:48.247 [2024-11-20 09:36:13.111904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.124130] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:48.247 [2024-11-20 09:36:13.124255] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:48.247 [2024-11-20 09:36:13.124328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.124349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:23:48.247 [2024-11-20 09:36:13.124392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.314 ms 00:23:48.247 [2024-11-20 09:36:13.124414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.145026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.145133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:23:48.247 [2024-11-20 09:36:13.145184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.570 ms 00:23:48.247 [2024-11-20 09:36:13.145208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.156157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.156258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:23:48.247 [2024-11-20 09:36:13.156314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.875 ms 00:23:48.247 [2024-11-20 09:36:13.156337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.167374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.167471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:23:48.247 [2024-11-20 09:36:13.167516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.998 ms 00:23:48.247 [2024-11-20 09:36:13.167537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.168152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.168238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:48.247 [2024-11-20 
09:36:13.168284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:23:48.247 [2024-11-20 09:36:13.168326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.242000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.242206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:48.247 [2024-11-20 09:36:13.242265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.638 ms 00:23:48.247 [2024-11-20 09:36:13.242289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.252495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:48.247 [2024-11-20 09:36:13.253310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.253400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:48.247 [2024-11-20 09:36:13.253450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.934 ms 00:23:48.247 [2024-11-20 09:36:13.253472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.253563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.253701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:23:48.247 [2024-11-20 09:36:13.253726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:48.247 [2024-11-20 09:36:13.253745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.253817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.253841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:48.247 [2024-11-20 09:36:13.253861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:48.247 [2024-11-20 09:36:13.253879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.253955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.253979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:48.247 [2024-11-20 09:36:13.253999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:48.247 [2024-11-20 09:36:13.254022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.254064] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:48.247 [2024-11-20 09:36:13.254087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.254106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:48.247 [2024-11-20 09:36:13.254170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:23:48.247 [2024-11-20 09:36:13.254188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.277216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.277361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:48.247 [2024-11-20 09:36:13.277415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.995 ms 00:23:48.247 [2024-11-20 09:36:13.277438] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.277605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.277640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:48.247 [2024-11-20 09:36:13.277746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:23:48.247 [2024-11-20 09:36:13.277757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.278711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3050.898 ms, result 0 00:23:48.247 [2024-11-20 09:36:13.293823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.247 [2024-11-20 09:36:13.309811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:48.247 [2024-11-20 09:36:13.317922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:48.247 [2024-11-20 09:36:13.558009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.558057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:48.247 [2024-11-20 09:36:13.558071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:48.247 [2024-11-20 09:36:13.558082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.558104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.558114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:48.247 [2024-11-20 09:36:13.558122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:48.247 [2024-11-20 09:36:13.558129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.558148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:48.247 [2024-11-20 09:36:13.558156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:48.247 [2024-11-20 09:36:13.558164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:48.247 [2024-11-20 09:36:13.558171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.247 [2024-11-20 09:36:13.558229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:23:48.247 true 00:23:48.247 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:48.506 { 00:23:48.506 "name": "ftl", 00:23:48.506 "properties": [ 00:23:48.506 { 00:23:48.506 "name": "superblock_version", 00:23:48.506 "value": 5, 00:23:48.506 "read-only": true 00:23:48.506 }, 
00:23:48.506 { 00:23:48.506 "name": "base_device", 00:23:48.506 "bands": [ 00:23:48.506 { 00:23:48.506 "id": 0, 00:23:48.506 "state": "CLOSED", 00:23:48.506 "validity": 1.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 1, 00:23:48.506 "state": "CLOSED", 00:23:48.506 "validity": 1.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 2, 00:23:48.506 "state": "CLOSED", 00:23:48.506 "validity": 0.007843137254901933 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 3, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 4, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 5, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 6, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 7, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 8, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 9, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 10, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 11, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 12, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 13, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 14, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 15, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 16, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 17, 00:23:48.506 "state": "FREE", 00:23:48.506 "validity": 0.0 00:23:48.506 } 00:23:48.506 ], 00:23:48.506 "read-only": true 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "name": "cache_device", 00:23:48.506 "type": "bdev", 00:23:48.506 "chunks": [ 00:23:48.506 { 00:23:48.506 "id": 0, 00:23:48.506 "state": "INACTIVE", 00:23:48.506 "utilization": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 1, 00:23:48.506 "state": "OPEN", 00:23:48.506 "utilization": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 2, 00:23:48.506 "state": "OPEN", 00:23:48.506 "utilization": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 3, 00:23:48.506 "state": "FREE", 00:23:48.506 "utilization": 0.0 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "id": 4, 00:23:48.506 "state": "FREE", 00:23:48.506 "utilization": 0.0 00:23:48.506 } 00:23:48.506 ], 00:23:48.506 "read-only": true 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "name": "verbose_mode", 00:23:48.506 "value": true, 00:23:48.506 "unit": "", 00:23:48.506 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:48.506 }, 00:23:48.506 { 00:23:48.506 "name": "prep_upgrade_on_shutdown", 00:23:48.506 "value": false, 00:23:48.506 "unit": "", 00:23:48.506 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:48.506 } 00:23:48.506 ] 00:23:48.506 } 00:23:48.506 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:23:48.506 09:36:13 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:48.506 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:48.763 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:23:48.763 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:23:48.763 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:23:48.763 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:23:48.763 09:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:23:48.763 Validate MD5 checksum, iteration 1 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:48.763 09:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:49.019 [2024-11-20 09:36:14.221947] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
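
The two jq probes above are the test's "is the device already dirty?" check: upgrade_shutdown.sh counts NV-cache chunks with non-zero utilization and bands in the OPENED state, and only continues when both counts are 0. A minimal standalone sketch of the first probe, assuming a running target that exposes an FTL bdev named "ftl" and invocation from the spdk repo root (the jq filter is copied verbatim from the trace):

    # count cache chunks that still hold data
    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "cache not empty: $used chunk(s) in use"
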
00:23:49.019 [2024-11-20 09:36:14.222203] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78049 ] 00:23:49.019 [2024-11-20 09:36:14.380762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.276 [2024-11-20 09:36:14.483549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.649  [2024-11-20T09:36:16.671Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-20T09:36:18.042Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:23:52.586 00:23:52.586 09:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:52.586 09:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:54.493 Validate MD5 checksum, iteration 2 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e2fd1e5d7b1369304721f593554b8b9b 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e2fd1e5d7b1369304721f593554b8b9b != \e\2\f\d\1\e\5\d\7\b\1\3\6\9\3\0\4\7\2\1\f\5\9\3\5\5\4\b\8\b\9\b ]] 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:54.493 09:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:54.752 [2024-11-20 09:36:19.952486] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
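
Iteration 1 just matched e2fd1e5d7b1369304721f593554b8b9b, and the second pass now repeats the same shape at --skip=1024: read 1024 MiB out of the ftln1 NVMe/TCP initiator bdev at a growing offset, hash the file, and compare against the recorded checksum. A sketch of that loop under the parameters of this run (iterations=2, bs=1 MiB, qd=2); tcp_dd is the ftl/common.sh wrapper around spdk_dd seen in the trace, while $testdir and the sums array are stand-in names for illustration:

    skip=0
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # pull 1024 x 1 MiB blocks from the initiator-side ftln1 bdev into a flat file
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      # e.g. e2fd1e5d... for pass 1, b6c2c64c... for pass 2 in this run
      [[ $sum == "${sums[i]}" ]] || exit 1
    done
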
00:23:54.752 [2024-11-20 09:36:19.952604] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78111 ] 00:23:54.752 [2024-11-20 09:36:20.113249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.009 [2024-11-20 09:36:20.209540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.383  [2024-11-20T09:36:22.405Z] Copying: 734/1024 [MB] (734 MBps) [2024-11-20T09:36:24.931Z] Copying: 1024/1024 [MB] (average 708 MBps) 00:23:59.475 00:23:59.733 09:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:59.733 09:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6c2c64cde5af66220394f89fd8e5b94 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6c2c64cde5af66220394f89fd8e5b94 != \b\6\c\2\c\6\4\c\d\e\5\a\f\6\6\2\2\0\3\9\4\f\8\9\f\d\8\e\5\b\9\4 ]] 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 77975 ]] 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 77975 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78194 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78194 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78194 ']' 00:24:01.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
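
What follows is the dirty-shutdown half of the test: tcp_target_shutdown_dirty kills the target with SIGKILL so no graceful FTL shutdown runs, then tcp_target_setup relaunches a fresh target from the saved tgt.json. On load the superblock is found dirty, so the full recovery path below (P2L checkpoint restore, open-chunk recovery, L2P restore from shared memory) must reproduce the exact same data, which the second checksum round afterwards verifies. A sketch of that restart sequence using the helper names visible in the trace; $rootdir/$testdir are stand-ins, and pids 77975/78194 are specific to this run:

    # tcp_target_shutdown_dirty: SIGKILL leaves the FTL superblock marked dirty
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # tcp_target_setup: relaunch from the JSON config captured before shutdown
    "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" --config="$testdir/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers
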
00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:01.692 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:01.950 [2024-11-20 09:36:27.157470] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:24:01.950 [2024-11-20 09:36:27.157590] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78194 ] 00:24:01.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 77975 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:01.950 [2024-11-20 09:36:27.310923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.950 [2024-11-20 09:36:27.392622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.883 [2024-11-20 09:36:27.968890] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:02.883 [2024-11-20 09:36:27.968943] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:02.883 [2024-11-20 09:36:28.112340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.112384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:02.883 [2024-11-20 09:36:28.112397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:02.883 [2024-11-20 09:36:28.112405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.112452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.112462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:02.883 [2024-11-20 09:36:28.112470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:24:02.883 [2024-11-20 09:36:28.112476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.112498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:02.883 [2024-11-20 09:36:28.113208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:02.883 [2024-11-20 09:36:28.113222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.113230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:02.883 [2024-11-20 09:36:28.113238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.731 ms 00:24:02.883 [2024-11-20 09:36:28.113246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.113617] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:02.883 [2024-11-20 09:36:28.129324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.129360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:02.883 [2024-11-20 09:36:28.129374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.708 ms 
00:24:02.883 [2024-11-20 09:36:28.129382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.138057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.138088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:02.883 [2024-11-20 09:36:28.138100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:24:02.883 [2024-11-20 09:36:28.138108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.138436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.138460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:02.883 [2024-11-20 09:36:28.138468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:24:02.883 [2024-11-20 09:36:28.138476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.138520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.138530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:02.883 [2024-11-20 09:36:28.138538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:24:02.883 [2024-11-20 09:36:28.138545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.138570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.138595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:02.883 [2024-11-20 09:36:28.138603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:02.883 [2024-11-20 09:36:28.138611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.138631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:02.883 [2024-11-20 09:36:28.141629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.141655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:02.883 [2024-11-20 09:36:28.141664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.004 ms 00:24:02.883 [2024-11-20 09:36:28.141672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.141700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.883 [2024-11-20 09:36:28.141709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:02.883 [2024-11-20 09:36:28.141717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:02.883 [2024-11-20 09:36:28.141724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.883 [2024-11-20 09:36:28.141743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:02.884 [2024-11-20 09:36:28.141758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:02.884 [2024-11-20 09:36:28.141791] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:02.884 [2024-11-20 09:36:28.141807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:24:02.884 [2024-11-20 
09:36:28.141908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:02.884 [2024-11-20 09:36:28.141918] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:02.884 [2024-11-20 09:36:28.141927] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:02.884 [2024-11-20 09:36:28.141937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:02.884 [2024-11-20 09:36:28.141945] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:02.884 [2024-11-20 09:36:28.141953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:02.884 [2024-11-20 09:36:28.141960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:02.884 [2024-11-20 09:36:28.141967] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:02.884 [2024-11-20 09:36:28.141974] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:02.884 [2024-11-20 09:36:28.141981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.884 [2024-11-20 09:36:28.141990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:02.884 [2024-11-20 09:36:28.141998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:24:02.884 [2024-11-20 09:36:28.142005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.884 [2024-11-20 09:36:28.142089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.884 [2024-11-20 09:36:28.142097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:02.884 [2024-11-20 09:36:28.142103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:24:02.884 [2024-11-20 09:36:28.142110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.884 [2024-11-20 09:36:28.142223] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:02.884 [2024-11-20 09:36:28.142233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:02.884 [2024-11-20 09:36:28.142244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:02.884 [2024-11-20 09:36:28.142266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:02.884 [2024-11-20 09:36:28.142279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:02.884 [2024-11-20 09:36:28.142287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:02.884 [2024-11-20 09:36:28.142293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:02.884 [2024-11-20 09:36:28.142321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:02.884 [2024-11-20 09:36:28.142328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 
09:36:28.142335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:02.884 [2024-11-20 09:36:28.142342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:02.884 [2024-11-20 09:36:28.142349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:02.884 [2024-11-20 09:36:28.142363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:02.884 [2024-11-20 09:36:28.142369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:02.884 [2024-11-20 09:36:28.142382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:02.884 [2024-11-20 09:36:28.142408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:02.884 [2024-11-20 09:36:28.142426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:02.884 [2024-11-20 09:36:28.142445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:02.884 [2024-11-20 09:36:28.142464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:02.884 [2024-11-20 09:36:28.142483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:02.884 [2024-11-20 09:36:28.142501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:02.884 [2024-11-20 09:36:28.142520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:02.884 [2024-11-20 09:36:28.142526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142533] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:02.884 [2024-11-20 09:36:28.142540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:02.884 
[2024-11-20 09:36:28.142546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:02.884 [2024-11-20 09:36:28.142561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:02.884 [2024-11-20 09:36:28.142568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:02.884 [2024-11-20 09:36:28.142582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:02.884 [2024-11-20 09:36:28.142589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:02.884 [2024-11-20 09:36:28.142596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:02.884 [2024-11-20 09:36:28.142602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:02.884 [2024-11-20 09:36:28.142610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:02.884 [2024-11-20 09:36:28.142619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:02.884 [2024-11-20 09:36:28.142634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:02.884 [2024-11-20 09:36:28.142655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:02.884 [2024-11-20 09:36:28.142662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:02.884 [2024-11-20 09:36:28.142669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:02.884 [2024-11-20 09:36:28.142676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:02.884 [2024-11-20 09:36:28.142711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:02.885 [2024-11-20 09:36:28.142719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:02.885 [2024-11-20 09:36:28.142726] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:02.885 [2024-11-20 09:36:28.142736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.885 [2024-11-20 09:36:28.142743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.885 [2024-11-20 09:36:28.142750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:02.885 [2024-11-20 09:36:28.142757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:02.885 [2024-11-20 09:36:28.142765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:02.885 [2024-11-20 09:36:28.142773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.142782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:02.885 [2024-11-20 09:36:28.142790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:24:02.885 [2024-11-20 09:36:28.142796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.166405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.166439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:02.885 [2024-11-20 09:36:28.166449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.559 ms 00:24:02.885 [2024-11-20 09:36:28.166457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.166495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.166502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:02.885 [2024-11-20 09:36:28.166510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:02.885 [2024-11-20 09:36:28.166518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.196730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.196867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:02.885 [2024-11-20 09:36:28.196883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.163 ms 00:24:02.885 [2024-11-20 09:36:28.196891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.196922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.196931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:02.885 [2024-11-20 09:36:28.196939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:02.885 [2024-11-20 09:36:28.196946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.197044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.197055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:24:02.885 [2024-11-20 09:36:28.197063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:24:02.885 [2024-11-20 09:36:28.197071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.197108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.197115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:02.885 [2024-11-20 09:36:28.197123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:24:02.885 [2024-11-20 09:36:28.197130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.211030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.211060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:02.885 [2024-11-20 09:36:28.211070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.881 ms 00:24:02.885 [2024-11-20 09:36:28.211077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.211186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.211197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:24:02.885 [2024-11-20 09:36:28.211205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:02.885 [2024-11-20 09:36:28.211212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.236490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.236529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:24:02.885 [2024-11-20 09:36:28.236542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.259 ms 00:24:02.885 [2024-11-20 09:36:28.236549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.245856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.245979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:02.885 [2024-11-20 09:36:28.246001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:24:02.885 [2024-11-20 09:36:28.246009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.298961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.299012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:02.885 [2024-11-20 09:36:28.299029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.897 ms 00:24:02.885 [2024-11-20 09:36:28.299037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.299165] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:24:02.885 [2024-11-20 09:36:28.299255] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:24:02.885 [2024-11-20 09:36:28.299364] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:24:02.885 [2024-11-20 09:36:28.299449] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:24:02.885 [2024-11-20 09:36:28.299458] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.299466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:24:02.885 [2024-11-20 09:36:28.299474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:24:02.885 [2024-11-20 09:36:28.299481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.299536] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:24:02.885 [2024-11-20 09:36:28.299548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.299559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:24:02.885 [2024-11-20 09:36:28.299567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:02.885 [2024-11-20 09:36:28.299574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.313685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.313721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:24:02.885 [2024-11-20 09:36:28.313733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.090 ms 00:24:02.885 [2024-11-20 09:36:28.313740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.322206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.322236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:24:02.885 [2024-11-20 09:36:28.322247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:02.885 [2024-11-20 09:36:28.322255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:02.885 [2024-11-20 09:36:28.322364] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:24:02.885 [2024-11-20 09:36:28.322487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:02.885 [2024-11-20 09:36:28.322500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:02.885 [2024-11-20 09:36:28.322509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.125 ms 00:24:02.885 [2024-11-20 09:36:28.322516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:03.449 [2024-11-20 09:36:28.746100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:03.449 [2024-11-20 09:36:28.746158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:03.449 [2024-11-20 09:36:28.746171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 422.724 ms 00:24:03.449 [2024-11-20 09:36:28.746178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:03.449 [2024-11-20 09:36:28.749494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:03.449 [2024-11-20 09:36:28.749523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:03.449 [2024-11-20 09:36:28.749532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:24:03.449 [2024-11-20 09:36:28.749539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:03.449 [2024-11-20 09:36:28.749846] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:24:03.449 [2024-11-20 09:36:28.749875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:03.449 [2024-11-20 09:36:28.749882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:03.449 [2024-11-20 09:36:28.749889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:24:03.449 [2024-11-20 09:36:28.749895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:03.449 [2024-11-20 09:36:28.749919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:03.449 [2024-11-20 09:36:28.749927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:03.449 [2024-11-20 09:36:28.749934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:03.449 [2024-11-20 09:36:28.749940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:03.449 [2024-11-20 09:36:28.749971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 427.609 ms, result 0 00:24:03.449 [2024-11-20 09:36:28.750002] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:24:03.449 [2024-11-20 09:36:28.750084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:03.449 [2024-11-20 09:36:28.750093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:03.449 [2024-11-20 09:36:28.750099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:24:03.449 [2024-11-20 09:36:28.750105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.174099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.174164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:04.016 [2024-11-20 09:36:29.174183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 423.171 ms 00:24:04.016 [2024-11-20 09:36:29.174195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.178365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.178401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:04.016 [2024-11-20 09:36:29.178417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms 00:24:04.016 [2024-11-20 09:36:29.178428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.178840] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:24:04.016 [2024-11-20 09:36:29.178874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.178886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:04.016 [2024-11-20 09:36:29.178899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:24:04.016 [2024-11-20 09:36:29.178911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.178954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.178968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:04.016 [2024-11-20 09:36:29.178981] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:04.016 [2024-11-20 09:36:29.178992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.179042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 429.026 ms, result 0 00:24:04.016 [2024-11-20 09:36:29.179097] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:04.016 [2024-11-20 09:36:29.179113] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:04.016 [2024-11-20 09:36:29.179128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.179143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:24:04.016 [2024-11-20 09:36:29.179157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 856.781 ms 00:24:04.016 [2024-11-20 09:36:29.179169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.179215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.179230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:24:04.016 [2024-11-20 09:36:29.179248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:04.016 [2024-11-20 09:36:29.179259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.190026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:04.016 [2024-11-20 09:36:29.190160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.190183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:04.016 [2024-11-20 09:36:29.190197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.876 ms 00:24:04.016 [2024-11-20 09:36:29.190209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.190988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.191015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:24:04.016 [2024-11-20 09:36:29.191034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.690 ms 00:24:04.016 [2024-11-20 09:36:29.191046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.193383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.193412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:24:04.016 [2024-11-20 09:36:29.193425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.307 ms 00:24:04.016 [2024-11-20 09:36:29.193436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.193493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.193508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:24:04.016 [2024-11-20 09:36:29.193522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:04.016 [2024-11-20 09:36:29.193539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.193681] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.193696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:04.016 [2024-11-20 09:36:29.193710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:04.016 [2024-11-20 09:36:29.193723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.016 [2024-11-20 09:36:29.193752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.016 [2024-11-20 09:36:29.193766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:04.016 [2024-11-20 09:36:29.193778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:04.016 [2024-11-20 09:36:29.193792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.017 [2024-11-20 09:36:29.193832] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:04.017 [2024-11-20 09:36:29.193850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.017 [2024-11-20 09:36:29.193862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:04.017 [2024-11-20 09:36:29.193875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:24:04.017 [2024-11-20 09:36:29.193888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.017 [2024-11-20 09:36:29.193959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:04.017 [2024-11-20 09:36:29.193974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:04.017 [2024-11-20 09:36:29.193987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:24:04.017 [2024-11-20 09:36:29.194000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:04.017 [2024-11-20 09:36:29.195197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1082.460 ms, result 0 00:24:04.017 [2024-11-20 09:36:29.210736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.017 [2024-11-20 09:36:29.226710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:04.017 [2024-11-20 09:36:29.234896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:04.275 Validate MD5 checksum, iteration 1 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:04.275 09:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:04.532 [2024-11-20 09:36:29.744749] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:24:04.532 [2024-11-20 09:36:29.745042] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78223 ] 00:24:04.532 [2024-11-20 09:36:29.902439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.789 [2024-11-20 09:36:30.001230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.162  [2024-11-20T09:36:32.265Z] Copying: 690/1024 [MB] (690 MBps) [2024-11-20T09:36:33.197Z] Copying: 1024/1024 [MB] (average 692 MBps) 00:24:07.741 00:24:07.741 09:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:07.741 09:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e2fd1e5d7b1369304721f593554b8b9b 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e2fd1e5d7b1369304721f593554b8b9b != \e\2\f\d\1\e\5\d\7\b\1\3\6\9\3\0\4\7\2\1\f\5\9\3\5\5\4\b\8\b\9\b ]] 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:09.672 Validate MD5 checksum, iteration 2 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:09.672 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:09.672 [2024-11-20 09:36:34.806206] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:24:09.672 [2024-11-20 09:36:34.806336] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:24:09.672 [2024-11-20 09:36:34.963197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.672 [2024-11-20 09:36:35.062564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.571  [2024-11-20T09:36:37.284Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-20T09:36:38.217Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:24:12.761 00:24:12.761 09:36:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:12.761 09:36:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6c2c64cde5af66220394f89fd8e5b94 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6c2c64cde5af66220394f89fd8e5b94 != \b\6\c\2\c\6\4\c\d\e\5\a\f\6\6\2\2\0\3\9\4\f\8\9\f\d\8\e\5\b\9\4 ]] 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78194 ]] 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78194 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78194 ']' 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 78194 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.284 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78194 00:24:15.284 killing process with pid 78194 00:24:15.285 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.285 09:36:40 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.285 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78194' 00:24:15.285 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 78194 00:24:15.285 09:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 78194 00:24:15.542 [2024-11-20 09:36:40.793620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:15.542 [2024-11-20 09:36:40.803602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.803636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:15.542 [2024-11-20 09:36:40.803646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:15.542 [2024-11-20 09:36:40.803653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.542 [2024-11-20 09:36:40.803670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:15.542 [2024-11-20 09:36:40.805720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.805742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:15.542 [2024-11-20 09:36:40.805750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.039 ms 00:24:15.542 [2024-11-20 09:36:40.805760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.542 [2024-11-20 09:36:40.805954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.805962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:15.542 [2024-11-20 09:36:40.805968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:24:15.542 [2024-11-20 09:36:40.805974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.542 [2024-11-20 09:36:40.807077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.807210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:15.542 [2024-11-20 09:36:40.807222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:24:15.542 [2024-11-20 09:36:40.807229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.542 [2024-11-20 09:36:40.808114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.808127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:15.542 [2024-11-20 09:36:40.808134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:24:15.542 [2024-11-20 09:36:40.808140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.542 [2024-11-20 09:36:40.815428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.542 [2024-11-20 09:36:40.815454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:15.542 [2024-11-20 09:36:40.815462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.261 ms 00:24:15.543 [2024-11-20 09:36:40.815468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.819500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.819524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:24:15.543 [2024-11-20 09:36:40.819533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.002 ms 00:24:15.543 [2024-11-20 09:36:40.819539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.819608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.819616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:15.543 [2024-11-20 09:36:40.819623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:24:15.543 [2024-11-20 09:36:40.819629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.826926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.826949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:24:15.543 [2024-11-20 09:36:40.826957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.280 ms 00:24:15.543 [2024-11-20 09:36:40.826962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.833840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.833943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:24:15.543 [2024-11-20 09:36:40.833955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.851 ms 00:24:15.543 [2024-11-20 09:36:40.833961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.841319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.841404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:15.543 [2024-11-20 09:36:40.841454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.334 ms 00:24:15.543 [2024-11-20 09:36:40.841472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.848594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.848731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:15.543 [2024-11-20 09:36:40.848777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.066 ms 00:24:15.543 [2024-11-20 09:36:40.848795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.848828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:15.543 [2024-11-20 09:36:40.848850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:15.543 [2024-11-20 09:36:40.848876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:15.543 [2024-11-20 09:36:40.848899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:15.543 [2024-11-20 09:36:40.848923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849052] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:15.543 [2024-11-20 09:36:40.849497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:15.543 [2024-11-20 09:36:40.849513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 82095c67-4430-4aad-a76e-942d979bfff9 00:24:15.543 [2024-11-20 09:36:40.849537] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:15.543 [2024-11-20 09:36:40.849552] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:24:15.543 [2024-11-20 09:36:40.849592] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:24:15.543 [2024-11-20 09:36:40.849610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:24:15.543 [2024-11-20 09:36:40.849625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:15.543 [2024-11-20 09:36:40.849641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:15.543 [2024-11-20 09:36:40.849655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:15.543 [2024-11-20 09:36:40.849671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:15.543 [2024-11-20 09:36:40.849685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:15.543 [2024-11-20 09:36:40.849700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.849715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:15.543 [2024-11-20 09:36:40.849759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.873 ms 00:24:15.543 [2024-11-20 09:36:40.849777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.859690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.859774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:24:15.543 [2024-11-20 09:36:40.859860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.887 ms 00:24:15.543 [2024-11-20 09:36:40.859878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.860161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:15.543 [2024-11-20 09:36:40.860181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:15.543 [2024-11-20 09:36:40.860225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:24:15.543 [2024-11-20 09:36:40.860242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.893484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.543 [2024-11-20 09:36:40.893588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:15.543 [2024-11-20 09:36:40.893626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.543 [2024-11-20 09:36:40.893643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.893680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.543 [2024-11-20 09:36:40.893696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:15.543 [2024-11-20 09:36:40.893711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.543 [2024-11-20 09:36:40.893725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.893803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.543 [2024-11-20 09:36:40.893823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:15.543 [2024-11-20 09:36:40.893839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.543 [2024-11-20 09:36:40.893895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.893922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.543 [2024-11-20 09:36:40.893968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:15.543 [2024-11-20 09:36:40.893985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.543 [2024-11-20 09:36:40.894037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.543 [2024-11-20 09:36:40.954656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.543 [2024-11-20 09:36:40.954788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:15.543 [2024-11-20 09:36:40.954826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.543 [2024-11-20 09:36:40.954844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.004083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.004209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:15.801 [2024-11-20 09:36:41.004247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.004265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.005208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.005289] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:15.801 [2024-11-20 09:36:41.005345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.005363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.005427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.005522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:15.801 [2024-11-20 09:36:41.005544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.005564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.005647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.005771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:15.801 [2024-11-20 09:36:41.005789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.005804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.005842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.005859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:15.801 [2024-11-20 09:36:41.005873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.005921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.005961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.005978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:15.801 [2024-11-20 09:36:41.005993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.006007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.006047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:15.801 [2024-11-20 09:36:41.006098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:15.801 [2024-11-20 09:36:41.006115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:15.801 [2024-11-20 09:36:41.006130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:15.801 [2024-11-20 09:36:41.006226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 202.603 ms, result 0 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:16.366 Remove shared memory files 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid77975 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:16.366 ************************************ 00:24:16.366 END TEST ftl_upgrade_shutdown 00:24:16.366 ************************************ 00:24:16.366 00:24:16.366 real 1m17.298s 00:24:16.366 user 1m50.128s 00:24:16.366 sys 0m18.081s 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.366 09:36:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@14 -- # killprocess 72496 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 72496 ']' 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@958 -- # kill -0 72496 00:24:16.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72496) - No such process 00:24:16.366 Process with pid 72496 is not found 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72496 is not found' 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=78391 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 78391 00:24:16.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@835 -- # '[' -z 78391 ']' 00:24:16.366 09:36:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.366 09:36:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:16.366 [2024-11-20 09:36:41.756736] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:24:16.366 [2024-11-20 09:36:41.756994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78391 ] 00:24:16.624 [2024-11-20 09:36:41.914441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.624 [2024-11-20 09:36:42.013323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.188 09:36:42 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.188 09:36:42 ftl -- common/autotest_common.sh@868 -- # return 0 00:24:17.188 09:36:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:17.492 nvme0n1 00:24:17.492 09:36:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:24:17.492 09:36:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:17.492 09:36:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:17.759 09:36:43 ftl -- ftl/common.sh@28 -- # stores=2122fb04-8064-4cf5-b5c1-d1513cae5689 00:24:17.759 09:36:43 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:24:17.759 09:36:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2122fb04-8064-4cf5-b5c1-d1513cae5689 00:24:18.017 09:36:43 ftl -- ftl/ftl.sh@23 -- # killprocess 78391 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@954 -- # '[' -z 78391 ']' 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@958 -- # kill -0 78391 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@959 -- # uname 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78391 00:24:18.017 killing process with pid 78391 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78391' 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@973 -- # kill 78391 00:24:18.017 09:36:43 ftl -- common/autotest_common.sh@978 -- # wait 78391 00:24:19.387 09:36:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:19.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:19.644 Waiting for block devices as requested 00:24:19.644 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:19.644 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:19.901 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:19.901 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.187 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:25.187 09:36:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:24:25.187 Remove shared memory files 00:24:25.187 09:36:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:25.187 09:36:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:24:25.187 09:36:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:24:25.187 09:36:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:24:25.187 09:36:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:25.187 09:36:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:24:25.187 
************************************ 00:24:25.187 END TEST ftl 00:24:25.187 ************************************ 00:24:25.187 00:24:25.187 real 8m13.093s 00:24:25.187 user 10m42.847s 00:24:25.187 sys 1m3.118s 00:24:25.187 09:36:50 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.187 09:36:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:25.187 09:36:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:25.187 09:36:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:25.187 09:36:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:25.187 09:36:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:25.187 09:36:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:25.187 09:36:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:25.187 09:36:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:25.187 09:36:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:25.187 09:36:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:25.187 09:36:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:25.187 09:36:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.187 09:36:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.187 09:36:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:25.187 09:36:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:25.187 09:36:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:25.187 09:36:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.559 INFO: APP EXITING 00:24:26.559 INFO: killing all VMs 00:24:26.559 INFO: killing vhost app 00:24:26.559 INFO: EXIT DONE 00:24:26.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:26.817 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:26.817 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:26.817 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:24:26.817 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:24:27.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.641 Cleaning 00:24:27.641 Removing: /var/run/dpdk/spdk0/config 00:24:27.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:27.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:27.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:27.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:27.641 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:27.641 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:27.641 Removing: /var/run/dpdk/spdk0 00:24:27.641 Removing: /var/run/dpdk/spdk_pid56957 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57153 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57366 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57463 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57498 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57615 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57633 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57827 00:24:27.641 Removing: /var/run/dpdk/spdk_pid57920 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58016 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58125 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58217 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58258 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58289 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58365 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58449 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58878 00:24:27.641 Removing: /var/run/dpdk/spdk_pid58938 
00:24:27.641 Removing: /var/run/dpdk/spdk_pid59001 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59017 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59119 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59135 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59226 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59242 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59295 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59313 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59366 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59384 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59544 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59581 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59664 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59842 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59926 00:24:27.641 Removing: /var/run/dpdk/spdk_pid59962 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60402 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60502 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60612 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60667 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60694 00:24:27.641 Removing: /var/run/dpdk/spdk_pid60771 00:24:27.641 Removing: /var/run/dpdk/spdk_pid61397 00:24:27.642 Removing: /var/run/dpdk/spdk_pid61439 00:24:27.642 Removing: /var/run/dpdk/spdk_pid61936 00:24:27.642 Removing: /var/run/dpdk/spdk_pid62034 00:24:27.642 Removing: /var/run/dpdk/spdk_pid62149 00:24:27.642 Removing: /var/run/dpdk/spdk_pid62202 00:24:27.642 Removing: /var/run/dpdk/spdk_pid62228 00:24:27.642 Removing: /var/run/dpdk/spdk_pid62253 00:24:27.642 Removing: /var/run/dpdk/spdk_pid64106 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64237 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64247 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64259 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64304 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64308 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64320 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64366 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64370 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64382 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64427 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64431 00:24:27.899 Removing: /var/run/dpdk/spdk_pid64443 00:24:27.899 Removing: /var/run/dpdk/spdk_pid65802 00:24:27.899 Removing: /var/run/dpdk/spdk_pid65906 00:24:27.899 Removing: /var/run/dpdk/spdk_pid67304 00:24:27.899 Removing: /var/run/dpdk/spdk_pid68706 00:24:27.900 Removing: /var/run/dpdk/spdk_pid68804 00:24:27.900 Removing: /var/run/dpdk/spdk_pid68891 00:24:27.900 Removing: /var/run/dpdk/spdk_pid68973 00:24:27.900 Removing: /var/run/dpdk/spdk_pid69082 00:24:27.900 Removing: /var/run/dpdk/spdk_pid69152 00:24:27.900 Removing: /var/run/dpdk/spdk_pid69296 00:24:27.900 Removing: /var/run/dpdk/spdk_pid69658 00:24:27.900 Removing: /var/run/dpdk/spdk_pid69696 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70144 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70328 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70422 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70544 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70597 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70617 00:24:27.900 Removing: /var/run/dpdk/spdk_pid70933 00:24:27.900 Removing: /var/run/dpdk/spdk_pid71079 00:24:27.900 Removing: /var/run/dpdk/spdk_pid71157 00:24:27.900 Removing: /var/run/dpdk/spdk_pid71547 00:24:27.900 Removing: /var/run/dpdk/spdk_pid71691 00:24:27.900 Removing: /var/run/dpdk/spdk_pid72496 00:24:27.900 Removing: /var/run/dpdk/spdk_pid72634 00:24:27.900 Removing: /var/run/dpdk/spdk_pid72817 00:24:27.900 Removing: 
/var/run/dpdk/spdk_pid72909 00:24:27.900 Removing: /var/run/dpdk/spdk_pid73210 00:24:27.900 Removing: /var/run/dpdk/spdk_pid73456 00:24:27.900 Removing: /var/run/dpdk/spdk_pid73786 00:24:27.900 Removing: /var/run/dpdk/spdk_pid73968 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74087 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74134 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74233 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74258 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74305 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74468 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74686 00:24:27.900 Removing: /var/run/dpdk/spdk_pid74939 00:24:27.900 Removing: /var/run/dpdk/spdk_pid75247 00:24:27.900 Removing: /var/run/dpdk/spdk_pid75509 00:24:27.900 Removing: /var/run/dpdk/spdk_pid75853 00:24:27.900 Removing: /var/run/dpdk/spdk_pid75984 00:24:27.900 Removing: /var/run/dpdk/spdk_pid76073 00:24:27.900 Removing: /var/run/dpdk/spdk_pid76441 00:24:27.900 Removing: /var/run/dpdk/spdk_pid76500 00:24:27.900 Removing: /var/run/dpdk/spdk_pid76802 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77078 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77422 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77533 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77580 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77636 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77686 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77761 00:24:27.900 Removing: /var/run/dpdk/spdk_pid77975 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78049 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78111 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78194 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78223 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78283 00:24:27.900 Removing: /var/run/dpdk/spdk_pid78391 00:24:27.900 Clean 00:24:27.900 09:36:53 -- common/autotest_common.sh@1453 -- # return 0 00:24:27.900 09:36:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:27.900 09:36:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.900 09:36:53 -- common/autotest_common.sh@10 -- # set +x 00:24:28.157 09:36:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:28.158 09:36:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.158 09:36:53 -- common/autotest_common.sh@10 -- # set +x 00:24:28.158 09:36:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:28.158 09:36:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:28.158 09:36:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:28.158 09:36:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:28.158 09:36:53 -- spdk/autotest.sh@398 -- # hostname 00:24:28.158 09:36:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:28.158 geninfo: WARNING: invalid characters removed from testname! 
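The geninfo warning above is almost certainly triggered by the -t testname (fedora39-cloud-1721788873-2326): geninfo permits only letters, digits, and underscores in test names, so the hyphens are stripped with exactly this warning. Reduced to its essentials, the capture step logged here looks roughly like the following; this is a sketch, not the verbatim autotest.sh code, with the --rc branch/function-coverage switches from the log elided and paths taken from the log itself:

    # Capture the .gcda counters accumulated during the test run (sketch).
    # The hyphens in the -t testname are what trigger the geninfo warning.
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
        -t "$(hostname)" -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info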
00:24:54.678 09:37:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:54.678 09:37:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:56.571 09:37:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:59.093 09:37:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:02.369 09:37:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:04.265 09:37:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:06.794 09:37:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:06.794 09:37:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:06.794 09:37:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:06.794 09:37:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:06.794 09:37:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:06.794 09:37:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:07.052 + [[ -n 5019 ]] 00:25:07.052 + sudo kill 5019 00:25:07.060 [Pipeline] } 00:25:07.075 [Pipeline] // timeout 00:25:07.079 [Pipeline] } 00:25:07.093 [Pipeline] // stage 00:25:07.099 [Pipeline] } 00:25:07.114 [Pipeline] // catchError 00:25:07.122 [Pipeline] stage 00:25:07.124 [Pipeline] { (Stop VM) 00:25:07.166 [Pipeline] sh 00:25:07.460 + vagrant halt 00:25:09.987 ==> default: Halting domain... 
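For reference, the merge-and-filter chain logged above condenses to roughly this sketch; $OUT is shorthand for the spdk/../output directory used in the log, and the logged --rc switches and the --ignore-errors unused,unused flag on the /usr/* pass are omitted for brevity:

    OUT=/home/vagrant/spdk_repo/spdk/../output
    # Fold the baseline and test-time captures into one tracefile...
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # ...then strip third-party and tool sources, one pattern at a time,
    # rewriting cov_total.info in place as the log's successive -r passes do.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done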
00:25:14.177 [Pipeline] sh 00:25:14.452 + vagrant destroy -f 00:25:16.985 ==> default: Removing domain... 00:25:17.570 [Pipeline] sh 00:25:17.854 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:25:17.861 [Pipeline] } 00:25:17.878 [Pipeline] // stage 00:25:17.884 [Pipeline] } 00:25:17.900 [Pipeline] // dir 00:25:17.905 [Pipeline] } 00:25:17.919 [Pipeline] // wrap 00:25:17.924 [Pipeline] } 00:25:17.936 [Pipeline] // catchError 00:25:17.945 [Pipeline] stage 00:25:17.947 [Pipeline] { (Epilogue) 00:25:17.960 [Pipeline] sh 00:25:18.237 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:24.825 [Pipeline] catchError 00:25:24.827 [Pipeline] { 00:25:24.838 [Pipeline] sh 00:25:25.114 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:25.114 Artifacts sizes are good 00:25:25.122 [Pipeline] } 00:25:25.135 [Pipeline] // catchError 00:25:25.145 [Pipeline] archiveArtifacts 00:25:25.151 Archiving artifacts 00:25:25.302 [Pipeline] cleanWs 00:25:25.314 [WS-CLEANUP] Deleting project workspace... 00:25:25.314 [WS-CLEANUP] Deferred wipeout is used... 00:25:25.319 [WS-CLEANUP] done 00:25:25.321 [Pipeline] } 00:25:25.335 [Pipeline] // stage 00:25:25.340 [Pipeline] } 00:25:25.353 [Pipeline] // node 00:25:25.358 [Pipeline] End of Pipeline 00:25:25.390 Finished: SUCCESS